WO2022199416A1 - Camera module, terminal device, and imaging method
- Publication number: WO2022199416A1
- Application number: PCT/CN2022/080768
- Authority: WIPO (PCT)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
Definitions
- the present application relates to the technical field of camera modules, and in particular, to a camera module, a terminal device and an imaging method.
- the images captured by the on-board camera can reflect the environment around the vehicle, thus providing necessary information for safe driving.
- the dynamic range in the vehicle-mounted camera is a relatively important functional parameter.
- the dynamic range refers to the range between the brightness of the brightest object and that of the darkest object whose details can still be displayed normally in the same picture captured by the vehicle-mounted camera. Therefore, how to obtain high-quality images with a wide dynamic range is a technical problem that urgently needs to be solved for the vehicle-mounted camera.
- the present application provides a camera module, a terminal device and an imaging method, which are used to improve the dynamic range of the camera module.
- In a first aspect, the present application provides a camera module. The camera module may include a first optical lens assembly, a light splitting assembly, a first image sensing assembly and a second image sensing assembly. The photosensitive areas of the first image sensing assembly and the second image sensing assembly are the same, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly.
- The first optical lens assembly is used to receive light from the target object; the light splitting assembly is used to split the light propagating through the first optical lens assembly to obtain a first light and a second light, where the first light is propagated to the first image sensing assembly and the second light is propagated to the second image sensing assembly.
- Because the photosensitive areas are the same, the sensitivity of the first image sensing assembly is smaller than the sensitivity of the second image sensing assembly. The first image sensing assembly has high resolution and low sensitivity to light, is not easily overexposed by bright light, can identify high-brightness light, and can be used for imaging bright targets; the second image sensing assembly has lower resolution and higher sensitivity to light, can identify low-brightness light, and can be used for imaging dim targets. In this way, the camera module can image both dim targets and bright targets, which helps to improve the dynamic range of the camera module.
- the camera module may further include a first polarizer located between the light splitting component and the first image sensing component.
- Only the part of the first light that is parallel to the polarization direction of the first polarizer can pass through, so the intensity of the first light entering the first image sensing assembly is further weakened. This further reduces overexposure of the first image sensing assembly by bright light and helps to further improve the dynamic range of the camera module.
- the polarization direction of the first polarizer is perpendicular to the main polarization direction of the glare propagating through the first optical lens assembly.
- By introducing the first polarizer in the optical path of the high-resolution first image sensing assembly, and designing the polarization direction of the first polarizer to be perpendicular to the main polarization direction of the received glare, the glare entering the high-resolution first image sensing assembly can be eliminated or weakened, so that the camera module can obtain a glare-free image with a high dynamic range.
- The ratio of the intensity of the first light to that of the second light is less than 1. In this way, the intensity of the first light entering the high-resolution first image sensing assembly can be further reduced, and the intensity of the second light entering the low-resolution second image sensing assembly can be increased, which helps to further improve the dynamic range of the camera module.
- The first image sensing assembly is used to perform photoelectric conversion on the received first light to obtain information of a first image; the second image sensing assembly is used to perform photoelectric conversion on the received second light to obtain information of a second image. The information of the first image and the information of the second image are used to form an image of the target object.
- The camera module may further include a first processing component. The first processing component can receive the information of the first image from the first image sensing assembly and the information of the second image from the second image sensing assembly, and generate an image of the target object according to the information of the first image and the information of the second image.
- The first processing component is configured to upsample the information of the second image to obtain information of a third image, where the resolution of the third image is the same as the resolution of the first image, and to fuse the information of the first image with the information of the third image to obtain the image of the target object. The resulting image of the target object has higher quality.
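- As a rough illustration of this upsample-and-fuse step, the sketch below (a non-authoritative example assuming NumPy and OpenCV are available; the brightness-based weighting is an illustrative choice, not the fusion algorithm specified in this application) upsamples the low-resolution, high-sensitivity frame to the resolution of the high-resolution frame and blends the two:

```python
import numpy as np
import cv2  # assumed available; any interpolation library would do

def fuse_dual_exposure(img_hi_res, img_lo_res):
    """Fuse a high-resolution/low-sensitivity frame with a
    low-resolution/high-sensitivity frame (illustrative sketch).
    Both inputs are assumed to be 8-bit BGR images of the same scene."""
    h, w = img_hi_res.shape[:2]
    # Upsample the low-resolution image to the high-resolution grid
    # (this is the "third image" in the terminology above).
    img_up = cv2.resize(img_lo_res, (w, h), interpolation=cv2.INTER_LINEAR)

    hi = img_hi_res.astype(np.float32)
    up = img_up.astype(np.float32)

    # Simple brightness-based weighting: trust the low-sensitivity frame
    # in bright regions and the high-sensitivity frame in dark regions.
    luma = cv2.cvtColor(img_hi_res, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    w_hi = np.clip(luma, 0.05, 0.95)[..., None]

    fused = w_hi * hi + (1.0 - w_hi) * up
    return np.clip(fused, 0, 255).astype(np.uint8)
```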
- In a second aspect, the present application provides a terminal device, and the terminal device may include the camera module of the first aspect or of any possible implementation of the first aspect.
- the terminal device may be, for example, a smartphone, a vehicle, a smart home device, a smart manufacturing device, a robot, a drone, a surveying and mapping device, or an intelligent transportation device.
- In a third aspect, the present application provides an imaging method. The imaging method can be applied to a camera module, and the camera module can include a first image sensing assembly and a second image sensing assembly, where the photosensitive areas of the first image sensing assembly and the second image sensing assembly are the same, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly.
- The method includes: receiving light from a target object; splitting the light from the target object to obtain a first light and a second light; propagating the first light to the first image sensing assembly; and propagating the second light to the second image sensing assembly.
- the camera module further includes a first polarizer located between the light splitting component and the first image sensing component.
- the polarization direction of the first polarizer is perpendicular to the main polarization direction of the glare light propagating through the first optical lens assembly.
- the ratio of the intensity of the first light ray to the second light ray is less than 1.
- Photoelectric conversion is performed on the first light to obtain information of the first image, and photoelectric conversion is performed on the second light to obtain information of the second image; an image of the target object is generated according to the information of the first image and the information of the second image.
- The information of the second image can be upsampled to obtain information of a third image, and the information of the first image and the information of the third image can be fused to obtain the image of the target object, where the third image has the same resolution as the first image.
- The camera module to which the method of the third aspect is applied may be the camera module of the first aspect or of any possible implementation of the first aspect.
- In a fourth aspect, the present application provides a camera module including a second optical lens assembly, a third optical lens assembly, a first image sensing assembly and a second image sensing assembly, where the photosensitive areas of the first image sensing assembly and the second image sensing assembly are the same, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly.
- The second optical lens assembly is used to receive light from the target object and propagate a third light to the first image sensing assembly; the third optical lens assembly is used to receive light from the target object and propagate a fourth light to the second image sensing assembly.
- The camera module can realize dual-channel imaging: the second optical lens assembly can transmit the third light from the target object to the first image sensing assembly, and the third optical lens assembly can transmit the fourth light from the target object to the second image sensing assembly. Since sensitivity and resolution trade off against each other when the photosensitive area of an image sensing assembly is fixed, the sensitivity of the first image sensing assembly is smaller than the sensitivity of the second image sensing assembly. The first image sensing assembly has high resolution and low sensitivity to light, is not easily overexposed by bright light, can identify high-brightness light, and can be used for imaging bright targets; the second image sensing assembly has lower resolution and higher sensitivity to light, can identify low-brightness light, and can be used for imaging dim targets. In this way, the camera module can image both dim targets and bright targets, which helps to improve the dynamic range of the camera module.
- the camera module further includes a second polarizer located between the second optical lens assembly and the first image sensing assembly.
- Only the part of the third light that is parallel to the polarization direction of the second polarizer can pass through, so the intensity of the third light entering the first image sensing assembly is further weakened. This further reduces overexposure of the first image sensing assembly by bright light and helps to further improve the dynamic range of the camera module.
- the polarization direction of the second polarizer is perpendicular to the main polarization direction of the glare light propagating through the second optical lens assembly.
- By introducing the second polarizer in the optical path of the high-resolution first image sensing assembly, and designing the polarization direction of the second polarizer to be perpendicular to the main polarization direction of the received glare, the glare entering the high-resolution first image sensing assembly can be eliminated or weakened, so that the camera module can obtain a glare-free image with a high dynamic range.
- the aperture number of the second optical lens assembly is greater than the aperture number of the third optical lens assembly.
- the amount of light passing through different channels can be controlled by the aperture of the corresponding optical lens assembly.
- In this way, the intensity of the third light entering the high-resolution first image sensing assembly is smaller than the intensity of the fourth light entering the low-resolution second image sensing assembly; that is, the second optical lens assembly with a large aperture number is matched with the high-resolution first image sensing assembly, and the third optical lens assembly with a small aperture number is matched with the low-resolution second image sensing assembly, which helps to further improve the dynamic range of the camera module.
- the second optical lens assembly and the third optical lens assembly have the same focal length.
- the third light passing through the second optical lens assembly and the fourth light passing through the third optical lens assembly can be converged on the same plane, so that the first image sensing assembly and the second image sensing assembly can be arranged in the same plane on the substrate, which helps to simplify the assembly of the camera module.
- The first image sensing assembly is used to perform photoelectric conversion on the received third light to obtain information of a fourth image; the second image sensing assembly is used to perform photoelectric conversion on the received fourth light to obtain information of a fifth image. The information of the fourth image and the information of the fifth image are used to form an image of the target object.
- The camera module may further include a second processing component for receiving the information of the fourth image from the first image sensing assembly and the information of the fifth image from the second image sensing assembly, and obtaining an image of the target object according to the information of the fourth image and the information of the fifth image.
- The second processing component is configured to upsample the information of the fifth image to obtain information of a sixth image, where the resolution of the sixth image is the same as the resolution of the fourth image, and to fuse the information of the fourth image with the information of the sixth image to obtain the image of the target object.
- In a fifth aspect, the present application provides a terminal device, and the terminal device may include the camera module of the fourth aspect or of any possible implementation of the fourth aspect.
- the terminal device may be, for example, a smartphone, a vehicle, a smart home device, a smart manufacturing device, a robot, a drone, a surveying and mapping device, or an intelligent transportation device.
- In a sixth aspect, the present application provides an imaging method which can be applied to a camera module. The camera module can include a first image sensing assembly and a second image sensing assembly, where the photosensitive areas of the first image sensing assembly and the second image sensing assembly are the same, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly.
- the method includes receiving light from a target object, and propagating a third light to a first image sensing assembly and a fourth light to a second image sensing assembly.
- the camera module further includes a second polarizer located between the second optical lens assembly and the first image sensing assembly.
- the polarization direction of the second polarizer is perpendicular to the main polarization direction of the glare light propagating through the second optical lens assembly.
- the aperture number of the second optical lens assembly is greater than the aperture number of the third optical lens assembly.
- the second optical lens assembly and the third optical lens assembly have the same focal length.
- Photoelectric conversion may be performed on the received third light to obtain information of the fourth image, and photoelectric conversion may be performed on the received fourth light to obtain information of the fifth image; the information of the fourth image and the information of the fifth image are used to obtain the image of the target object.
- The information of the fifth image may be upsampled to obtain information of the sixth image, and the information of the fourth image and the information of the sixth image may be fused to obtain the image of the target object, where the resolution of the sixth image is the same as the resolution of the fourth image.
- The camera module to which the method of the sixth aspect is applied may be the camera module of the fourth aspect or of any possible implementation of the fourth aspect.
- In a seventh aspect, the present application provides a computer-readable storage medium in which a computer program or instructions are stored. When the computer program or instructions are executed by the camera module, the camera module is caused to perform the method of the third aspect or of any possible implementation of the third aspect, or to perform the method of the sixth aspect or of any possible implementation of the sixth aspect.
- FIG. 1a is a schematic diagram of the relationship between pixel size and resolution provided by the present application;
- FIG. 1b is a schematic diagram of the relationship between pixel size and resolution of another pixel provided by the present application;
- FIG. 1c is a schematic diagram of the polarization state of natural light provided by the present application;
- FIG. 1d is a schematic diagram of the polarization state of a linearly polarized light provided by the present application;
- FIG. 1e is a schematic diagram of the polarization state of another linearly polarized light provided by the present application;
- FIG. 1f is a schematic diagram of the polarization state of a partially polarized light provided by the present application;
- FIG. 1g is a schematic diagram of the principle of image fusion provided by the present application;
- FIG. 2 is a schematic diagram of the anti-glare principle of a polarizer provided by the present application;
- FIG. 3a is a schematic diagram of a possible application scenario of a camera module provided by the present application;
- FIG. 3b is a schematic diagram of another possible application scenario of the camera module provided by the present application;
- FIG. 4 is a schematic structural diagram of a camera module provided by the present application;
- FIG. 5a is a schematic structural diagram of a first optical lens assembly provided by the present application;
- FIG. 5b is a schematic structural diagram of another first optical lens assembly provided by the present application;
- FIG. 6 is a schematic diagram of the light-splitting principle of a light splitting assembly provided by the present application;
- FIG. 7 is a schematic diagram of the relationship between a first image sensing assembly and a second image sensing assembly provided by the present application;
- FIG. 8 is a schematic diagram of a process of fusing a first image and a second image provided by the present application;
- FIG. 9 is a schematic structural diagram of another camera module provided by the present application;
- FIG. 10 is a schematic structural diagram of another camera module provided by the present application;
- FIG. 11 is a schematic structural diagram of another camera module provided by the present application;
- FIG. 12 is a schematic structural diagram of a terminal device provided by the present application;
- FIG. 13 is a schematic flowchart of an imaging method provided by the present application;
- FIG. 14 is a schematic flowchart of another imaging method provided by the present application.
- Glare refers to a visual condition that causes visual discomfort and reduces the visibility of objects, caused by an unsuitable brightness distribution in the field of view, extreme brightness contrast in space or time, excessive brightness in one part of the field of view, or excessive brightness changes over time.
- For example, a road surface wetted by rain acts as a good mirror, and strong light sources (such as the sun) reflected by this mirror into the vehicle-mounted camera will cause strong glare. Similarly, when the refractive index of the air is lower the closer it is to the road surface, light incident toward the road surface is also bent; in this case the road surface is again equivalent to a good mirror, and a strong light source reflected by it into the vehicle-mounted camera generates glare.
- The dynamic range is an important parameter of the camera module; it refers to the range between the brightness of the brightest object and that of the darkest object whose details can still be displayed normally in the same picture captured by the camera module. The greater the dynamic range, the greater the degree to which objects that are too bright or too dark can still be displayed properly in the same frame.
- Upsampling can also be called image upscaling or image interpolation, and the main purpose is to upscale the image so that a higher resolution image can be obtained.
- Image enlargement usually adopts the interpolation method, that is, on the basis of the original image pixels, a suitable interpolation algorithm is used to insert new elements between the pixels.
- the interpolation algorithm may be traditional interpolation, edge image-based interpolation, or region-based image interpolation, etc., which is not limited in this application.
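- For instance, a minimal upsampling sketch (OpenCV's resize is used here as an assumed stand-in for whichever interpolation algorithm is chosen; the 2× factor is only an example):

```python
import cv2

def upsample(image, factor=2, method=cv2.INTER_CUBIC):
    """Enlarge an image by inserting interpolated pixels between the
    original ones; `method` selects the interpolation algorithm."""
    h, w = image.shape[:2]
    return cv2.resize(image, (w * factor, h * factor), interpolation=method)

# e.g. nearest-neighbour, bilinear or bicubic interpolation:
# upsample(img, 2, cv2.INTER_NEAREST)
# upsample(img, 2, cv2.INTER_LINEAR)
# upsample(img, 2, cv2.INTER_CUBIC)
```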
- a pixel may refer to the smallest unit that constitutes an imaging area of an image sensor.
- the size of the pixel refers to the physical size of the pixel, that is, the distance between the centers of adjacent pixels.
- For a given photosensitive area, resolution and pixel size trade off against each other. FIG. 1a and FIG. 1b show the relationship between pixel size and resolution under the same photosensitive area: the pixel size in FIG. 1a is a and the resolution is 4 × 4, while the pixel size in FIG. 1b is a/2 and the resolution is 8 × 8. It can be seen from FIG. 1a and FIG. 1b that the smaller the pixel size, the higher the resolution, and the larger the pixel size, the lower the resolution.
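- The relationship can be sketched numerically as follows (the sensor dimensions are arbitrary example values, not figures from this application):

```python
def grid_resolution(sensor_width_um, sensor_height_um, pixel_pitch_um):
    """Number of pixels that fit on a photosensitive area of fixed size."""
    cols = int(sensor_width_um // pixel_pitch_um)
    rows = int(sensor_height_um // pixel_pitch_um)
    return cols, rows

# Same photosensitive area, two different pixel pitches:
print(grid_resolution(8.0, 8.0, pixel_pitch_um=2.0))  # (4, 4)
print(grid_resolution(8.0, 8.0, pixel_pitch_um=1.0))  # (8, 8): halving the pitch quadruples the pixel count
```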
- Minimum illumination refers to the sensitivity of the image sensor to ambient light, or the darkest light required by the image sensor for normal imaging.
- The aperture can be used to control the amount of light entering the optical lens; that is, the aperture determines how much light the optical lens admits. The size of the aperture is usually expressed by the F number (written F/), and the F number is also called the aperture number. A large-aperture optical lens has a small F number, that is, a small aperture number; a small-aperture optical lens has a large F number, that is, a large aperture number.
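- As a rough sketch of why the F number governs the amount of admitted light (the inverse-square relation is standard photographic practice rather than a formula stated in this application):

```python
def relative_light(f_number):
    """Light gathered by a lens is roughly proportional to 1 / F^2."""
    return 1.0 / (f_number ** 2)

# An F/1.4 lens admits about four times as much light as an F/2.8 lens:
print(relative_light(1.4) / relative_light(2.8))  # ≈ 4.0
```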
- A polarizer, also called a polarizing filter, is a type of optical filter. Polarizers are used to absorb or reflect light in one polarization direction and transmit light in the orthogonal polarization direction, so the transmittance of light depends directly on its polarization state. Polarizers are generally classified into absorptive polarizers and reflective polarizers (RP). An absorptive polarizer strongly absorbs one of the two orthogonal polarization components of the incident light, while the other component is absorbed only weakly. A reflective polarizer transmits linearly polarized light in a certain direction and reflects light whose polarization direction is perpendicular to the transmitted direction.
- the absorbing polarizer may be, for example, a dichroic polarizer, and the reflective polarizer may be, for example, a polarizing beam splitter utilizing birefringence.
- A polarizer has a certain characteristic direction, called the polarization direction. The polarizer only allows light polarized parallel to the polarization direction to pass through, while absorbing or reflecting light polarized perpendicular to that direction.
- Image fusion is an image processing technology in which image data about the same target, collected through multiple source channels, is combined by image processing and specific algorithms so as to extract as much of the useful information in each channel as possible and finally synthesize a high-quality image (for example, in terms of brightness, sharpness and color); the fused image has a higher resolution than the original images. FIG. 1g is a schematic diagram of the principle of image fusion provided by the present application. Since image fusion can exploit the spatio-temporal correlation and complementary information of two (or more) images, the fused image describes the scene more comprehensively and clearly, which is more beneficial to identification and detection by a detection device. It should be noted that image fusion usually requires the images to be fused to have been registered and to have a consistent pixel bit width.
- FIG. 2 is a schematic diagram of the anti-glare principle of a polarizer provided by the present application. When the incident angle θ is the Brewster angle, the reflected light is linearly polarized and the transmitted light is approximately natural light; when the incident angle θ is not the Brewster angle, both the reflected light and the transmitted light are partially polarized. The Brewster angle is arctan(n2/n1), where n1 represents the refractive index of the medium in which the incident light travels and n2 represents the refractive index of the medium in which the refracted light travels.
- the glare reflected by the road surface is partially polarized light or linearly polarized light, and the reflected glare can be weakened or eliminated by adjusting the polarization direction of the polarizer. It should be understood that when the glare is partially polarized light, the angle between the main polarization direction (or called the long axis of polarization) of the glare and the road surface can be determined.
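- To make the numbers concrete, the short sketch below computes the Brewster angle for an air-to-water reflection and the polarizer orientation that blocks the reflected glare; the refractive indices are illustrative values, and the orthogonal-orientation rule follows the description above:

```python
import math

def brewster_angle_deg(n1, n2):
    """Brewster angle arctan(n2 / n1), returned in degrees."""
    return math.degrees(math.atan2(n2, n1))

# Air (n1 ≈ 1.0) to a water film on the road (n2 ≈ 1.33):
theta_b = brewster_angle_deg(1.0, 1.33)
print(round(theta_b, 1))  # ≈ 53.1 degrees

# Glare reflected near the Brewster angle is polarized parallel to the
# road surface (perpendicular to the plane of incidence), so the
# polarizer's polarization direction is set perpendicular to that main
# polarization direction in order to absorb or reflect the glare.
```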
- the camera module can be installed on a vehicle (eg, an unmanned vehicle, a smart vehicle, an electric vehicle, a digital vehicle, etc.) as a vehicle-mounted camera, as shown in FIG. 3a.
- The vehicle-mounted camera can obtain measurement information such as the distance of surrounding objects in real time or periodically, so as to provide necessary information for operations such as lane correction, vehicle distance maintenance, and reversing.
- The vehicle-mounted camera can realize: a) target recognition and classification, such as lane line recognition, traffic light recognition and traffic sign recognition; b) detection of the passable area (drivable area), mainly delimited by vehicles, ordinary road edges, curb edges, boundaries without visible obstacles, unknown boundaries, etc.; c) detection of laterally moving targets, such as pedestrians and vehicles crossing an intersection; d) localization and map creation, such as localization and map creation based on visual simultaneous localization and mapping (SLAM) technology. Therefore, vehicle-mounted cameras have been widely used in unmanned driving, autonomous driving, assisted driving, intelligent driving, connected vehicles, security monitoring, surveying and mapping, and other fields. It should be understood that the camera module can also be combined with an advanced driving assistant system (ADAS).
- the camera module can also be applied in various other scenarios, and is not limited to the scenarios exemplified above.
- The camera module can also be applied to terminal equipment or set in a component of the terminal equipment, for example a guided transport vehicle (automated guided vehicle, AGV) or an unmanned transport vehicle, etc.
- the camera module can also be installed on the drone as an onboard camera.
- the camera module can also be installed on roadside traffic equipment (such as a roadside unit (RSU)) as a roadside traffic camera, as shown in Figure 3b, so as to realize intelligent vehicle-road collaboration.
- the dynamic range is a more important functional parameter of the camera module.
- the dynamic range of the camera module can only reach 80-120 decibels (dB), but the dynamic range of the target in nature can reach 180dB. Therefore, the dynamic range of the camera module needs to be improved.
- At present, the dynamic range of the camera module is mainly improved by optimizing the pixels included in the image sensor of the camera module, but the improvement achievable in this way is limited. Therefore, how to obtain high-quality images with a wide dynamic range is a technical problem that the camera module urgently needs to solve.
- the present application provides a camera module, which can obtain a wider dynamic range.
- the camera module proposed in the present application will be described in detail below with reference to FIG. 4 to FIG. 11 .
- As shown in FIG. 4, the camera module may include a first optical lens assembly 401, a light splitting assembly 402, a first image sensing assembly 403 and a second image sensing assembly 404. The resolution of the first image sensing assembly 403 is greater than that of the second image sensing assembly 404. The first optical lens assembly 401 is used to receive the light from the target object; the light splitting assembly 402 is used to split the light transmitted by the first optical lens assembly 401 to obtain the first light and the second light, where the first light is transmitted to the first image sensing assembly 403 and the second light is transmitted to the second image sensing assembly 404.
- Because the photosensitive areas are the same, the sensitivity of the first image sensing assembly is smaller than the sensitivity of the second image sensing assembly. That is to say, the first image sensing assembly has high resolution and low sensitivity to light, is not easily overexposed by bright light, can identify high-brightness light, and can be used for imaging bright targets; the second image sensing assembly has lower resolution and higher sensitivity to light, can identify low-brightness light, and can be used for imaging dim targets. In this way, the camera module can image both dim targets and bright targets, which helps to improve the dynamic range of the camera module. In effect, the second image sensing assembly sacrifices part of the image resolution in exchange for an increase in sensitivity, thereby improving the dynamic range of the camera module.
- the information carried by the light propagating through the first optical lens assembly is the same as the information carried by the light entering the first optical lens assembly (ie, the light from the target object).
- the target object includes but is not limited to a single object.
- For example, when photographing a person, the target object includes the person and the scene around the person; that is, the scene around the person is also part of the target object. It can also be understood that all objects within the field of view of the first optical lens assembly can be referred to as the target object.
- the first optical lens assembly can receive light from the target object, and by changing the propagation direction of the light from the target object, the light from the target object can enter the camera module as much as possible. Further, optionally, the first optical lens assembly also propagates glare (such as the glare reflected from the road surface) into the camera module.
- the first optical lens assembly may be composed of at least one optical lens.
- FIG. 5a shows a schematic structural diagram of a first optical lens assembly.
- the first optical lens assembly includes seven optical lenses as an example.
- the light from the target object can be transmitted into the camera module through the first optical lens assembly as much as possible, and further, external glare can also be transmitted into the camera module through the first optical lens assembly.
- FIG. 5b shows a schematic structural diagram of another first optical lens assembly.
- the first optical lens assembly may include six optical lenses.
- The structure of the first optical lens assembly shown in FIG. 5a or FIG. 5b above is only an example, and the first optical lens assembly in the present application may have more or fewer optical lenses than shown in FIG. 5a or FIG. 5b.
- The optical lens can be any one of a convex lens (such as a biconvex lens, a plano-convex lens, or a convex-concave lens) or a concave lens (such as a biconcave lens, a plano-concave lens, or a meniscus lens), or a combination of a convex lens and a concave lens, which is not limited in this application.
- the material of at least one optical lens in the first optical lens assembly is glass.
- the optical lenses in the first optical lens assembly can be cut in the height direction of the camera module (see FIG. 5a above).
- The light splitting assembly may be configured to split (for example, into two) the light transmitted by the first optical lens assembly to obtain the first light and the second light. Specifically, the light splitting assembly may divide the intensity (also called the energy or the amplitude) of the light transmitted from the first optical lens assembly to obtain the first light and the second light. The information carried by the first light and the second light is the same, and each is the same as the information carried by the light transmitted by the first optical lens assembly. The sum of the intensity of the first light and the intensity of the second light is equal to, or approximately equal to, the intensity of the light transmitted by the first optical lens assembly.
- The ratio of the intensity of the first light to that of the second light may be less than 1. For example, the intensity ratio of the first light to the second light may be 2:8; for another example, 1:9; for yet another example, 4:6. It can also be understood that the intensity of the first light received by the first image sensing assembly is smaller than the intensity of the second light received by the second image sensing assembly. In this way, the intensity of the first light received by the high-resolution, low-sensitivity first image sensing assembly is smaller than the intensity of the second light received by the low-resolution, high-sensitivity second image sensing assembly, which helps to improve the dynamic range of the camera module. It should be understood that the ratio of the intensities of the first light and the second light may also be equal to 1 or greater than 1; the ratio can be designed according to actual requirements, which is not limited in this application.
- The light splitting assembly may be, for example, a beam splitting prism (beam splitter, BS) or a beam splitting plate. The beam splitting prism is formed by coating one or more thin films (i.e., a beam splitting film) on a surface of a prism, and the beam splitting plate is formed by coating one or more thin films (i.e., a beam splitting film) on one surface of a glass plate. Both the beam splitting prism and the beam splitting plate rely on the film having different transmittance and reflectance for the incident light, so as to split the light transmitted by the first optical lens assembly. For example, the beam splitting prism can divide the light transmitted by the first optical lens assembly into two to obtain the first light and the second light. It can also be understood that, after the light propagating from the first optical lens assembly passes through the light splitting assembly, part of it is transmitted (the first light) to the first image sensing assembly and the other part is reflected (the second light) to the second image sensing assembly. Exemplarily, the ratio of the intensities of the first light and the second light may be determined by the reflectance and transmittance of the coated beam splitting film.
- the first image sensing component may perform photoelectric conversion on the received first light to obtain information of the first image; the second image sensing component may perform photoelectric conversion on the second light to obtain information of the second image.
- The first image sensing assembly and the second image sensing assembly use non-identical architectures. Specifically, the photosensitive areas of the first image sensing assembly and the second image sensing assembly are the same, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly. Further, optionally, the first image sensing assembly includes first pixels and the second image sensing assembly includes second pixels, and the size of the first pixel is smaller than the size of the second pixel. Accordingly, the sensitivity of the first image sensing assembly to light is lower than the sensitivity of the second image sensing assembly to light.
- FIG. 7 is a schematic diagram of the relationship between a first image sensing assembly and a second image sensing assembly provided by the present application, in which (a) in FIG. 7 represents the first image sensing assembly and (b) in FIG. 7 represents the second image sensing assembly. The minimum repeatable unit of both the first image sensing assembly and the second image sensing assembly is RGGB, where R represents a pixel for receiving red light, G represents a pixel for receiving green light, and B represents a pixel for receiving blue light. The photosensitive areas of the first image sensing assembly and the second image sensing assembly are the same, and the size of the second pixel included in the second image sensing assembly is twice the size of the first pixel included in the first image sensing assembly. Therefore, the resolution of the first image sensing assembly is 4 times that of the second image sensing assembly, and the sensitivity of the first image sensing assembly is 1/4 of the sensitivity of the second image sensing assembly. That is to say, the first image sensing assembly has the characteristics of high resolution, small pixels and low sensitivity and can be used for imaging bright targets, while the second image sensing assembly has the characteristics of low resolution, large pixels and high sensitivity and can be used for imaging dim targets.
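- This trade-off can be sketched with a small calculation (the pixel pitches are example values, and the assumption that sensitivity scales with pixel area is a simplification):

```python
def compare_sensors(area_side_um, pitch_hi_res_um, pitch_lo_res_um):
    """Relative resolution and sensitivity of two sensors sharing the same
    photosensitive area, assuming sensitivity scales with pixel area."""
    res_hi = (area_side_um / pitch_hi_res_um) ** 2
    res_lo = (area_side_um / pitch_lo_res_um) ** 2
    sens_ratio = (pitch_hi_res_um / pitch_lo_res_um) ** 2
    return res_hi / res_lo, sens_ratio

# Second pixel twice the size of the first pixel:
print(compare_sensors(8.0, 1.0, 2.0))  # (4.0, 0.25): 4x resolution, 1/4 sensitivity
```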
- the resolution and the minimum repeatable unit of the first image sensing assembly and the second image sensing assembly shown in FIG. 7 are examples, and do not limit the present application.
- the smallest repeatable unit of the first image sensing assembly and the second image sensing assembly may also be RYYB.
- Taking the resolution of the first image sensing assembly as 8M, the resolution of the second image sensing assembly as 2M, and the ratio of the intensity of the first light to that of the second light as 2:8 as an example, Table 1 exemplarily shows the relationship between the improved dynamic range of the camera module based on the first embodiment and the dynamic range of a camera module based on the prior art.
- Table 1: Relationship between the dynamic range of the camera module based on the first embodiment and the dynamic range of a camera module based on the prior art
- The field of view of the first optical lens assembly may include a bright target and/or a dim target, collectively referred to as the target object. Suppose the energy of the incident light that an image sensor receives from the bright target is Eh and the energy of the incident light received from the dim target is El; the dynamic range can then be expressed as 10·log(Eh/El).
- In the first embodiment, the energy of the incident light that the first image sensing assembly receives from the bright target is 0.2 × Eh. That is, the intensity of the first light entering the first image sensing assembly is reduced to 0.2 times its original value, and therefore the overexposure-prevention capability of the first image sensing assembly for the bright target is improved by two times; the dynamic range of the camera module based on the first embodiment is thus effectively improved. The energy of the incident light that the second image sensing assembly receives from the bright target is 0.8 × Eh, which may be overexposed, and the energy of the incident light that the first image sensing assembly receives from the dim target is 0.2 × El, which may be too dark; a high dynamic range can be achieved later through the fusion of image information by the first processing component. For details, refer to the subsequent related descriptions, which will not be repeated here.
- The first image sensing assembly may be a complementary metal-oxide-semiconductor (CMOS) phototransistor, a charge-coupled device (CCD), a photon detector (PD), or a high-speed photodiode.
- CMOS phototransistor is a device or chip that converts optical signals into electrical signals based on complementary metal oxide semiconductor technology.
- Similarly, the second image sensing assembly may also be a complementary metal-oxide-semiconductor (CMOS) phototransistor, a charge-coupled device (CCD), a photon detector (PD), or a high-speed photodiode.
- the type of the first image sensing element may be the same as the type of the second image sensing element, for example, the first image sensing element and the second image sensing element may both be CMOS phototransistors.
- the type of the first image sensing element may be different from that of the second image sensing element, for example, the first image sensing element is a CMOS phototransistor, and the second image sensing element is a CCD.
- For example, the first image sensing assembly may be a first image sensor and the second image sensing assembly may be a second image sensor. The resolution range of the first image sensor may be [8 million pixels, 48 million pixels], and the resolution range of the second image sensor may likewise fall within [8 million pixels, 48 million pixels], provided that the resolution of the first image sensor is greater than the resolution of the second image sensor. For example, the resolution of the first image sensor may be 12 million pixels, 20 million pixels, or 48 million pixels, and the resolution of the second image sensor may be 8 million pixels. The resolution of the first image sensor may also be greater than 48 million pixels, for example 52 million pixels, 60 million pixels, or 72 million pixels, and the resolution of the second image sensor may also be greater than 8 million pixels. It should be understood that any combination in which the resolution of the first image sensing assembly is greater than that of the second image sensing assembly is acceptable.
- the camera module in the first embodiment may further include a first polarizer. Further, optionally, the camera module may further include a first processing component. The first polarizer and the first processing component are respectively introduced below.
- the first polarizer may be located between the light splitting component and the first image sensing component (see Figure 9 below).
- The first polarizer is used to allow the part of the first light whose polarization state is parallel to the polarization direction of the first polarizer to pass through, and the first light passing through the first polarizer is converged on the first image sensing assembly. In this way, the intensity of the first light entering the first image sensing assembly can be further reduced, thereby further reducing overexposure of the first image sensing assembly by bright light and further improving the dynamic range of the camera module.
- the first light rays passing through the first polarizer are part of the first rays entering the first polarizer (ie, the part of the first rays parallel to the polarization direction of the first polarizer).
- the polarization direction of the first polarizer is perpendicular to the main polarization direction of the glare propagating through the first optical lens assembly.
- Still taking the resolution of the first image sensing assembly as 8M, the resolution of the second image sensing assembly as 2M, and the ratio of the intensity of the first light to that of the second light as 2:8 as an example, and assuming that the camera module further includes the first polarizer, Table 2 shows the improved dynamic range of the camera module based on the first embodiment and the dynamic range of a camera module based on the prior art.
- Table 2: Improved dynamic range of the camera module based on the first embodiment versus the dynamic range of a camera module based on the prior art
- When the camera module further includes the first polarizer, and with reference to Table 2 and an analysis similar to that of Table 1 above, the overexposure-prevention capability of the first image sensing assembly for high-brightness targets is improved by 10 times, the low-light capability of the second image sensing assembly for dim targets is increased by 3.2 times, and the dynamic range of the camera module is improved by 10log32.
- It can be seen that the effect of improving the dynamic range of the camera module is related to the ratio of the intensities of the first light and the second light. When the camera module further includes the first polarizer, refer to Table 3 for the relationship between the intensity ratio of the first light to the second light and the effect of improving the dynamic range.
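- The 10-times, 3.2-times and 10log32 figures above can be reproduced with a short calculation, assuming (as simplifications) an ideal polarizer that passes half of the unpolarized first light, a 2:8 intensity split, and a second image sensing assembly four times as sensitive as the first:

```python
import math

split_hi, split_lo = 0.2, 0.8   # intensity split between the two paths
polarizer_pass = 0.5            # ideal polarizer on the high-resolution path
sensitivity_ratio = 4.0         # second assembly vs. first assembly

# Bright-target headroom of the high-resolution, low-sensitivity path:
overexposure_gain = 1.0 / (split_hi * polarizer_pass)   # = 10.0
# Dim-target capability of the low-resolution, high-sensitivity path:
low_light_gain = split_lo * sensitivity_ratio           # = 3.2

dr_improvement_db = 10 * math.log10(overexposure_gain * low_light_gain)
print(overexposure_gain, low_light_gain, round(dr_improvement_db, 1))  # 10.0 3.2 ~15.1
```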
- the first polarizer can not only eliminate or reduce glare, but also soften the first light received by the first image sensing component.
- The first processing component may receive the information of the first image from the first image sensing assembly and the information of the second image from the second image sensing assembly, and generate an image of the target object according to the information of the first image and the information of the second image.
- Specifically, the first processing component may upsample the information of the second image (for details, refer to the foregoing related description) to obtain information of the third image, where the resolution of the third image is the same as the resolution of the first image, and fuse the information of the first image with the information of the third image to obtain the image of the target object, as shown in FIG. 8. The resulting image of the target object has higher quality.
- the first processing component may further perform processing such as denoising, enhancement, segmentation and blurring on the image of the target obtained by fusion, so as to enrich the user experience.
- The first processing component may be, for example, an application processor (AP), a graphics processing unit (GPU), an image signal processor (ISP), a digital signal processor (DSP), and so on.
- the first processing component may be a central processing unit (CPU), other general-purpose processors, application specific integrated circuits (ASICs), field programmable gate arrays (field programmable gate arrays, FPGA) or other programmable logic devices, transistor logic devices, hardware components, or any combination thereof.
- a general-purpose processor may be a microprocessor or any conventional processor.
- The first processing component may also be a combination of any one of a CPU, an ASIC, an FPGA, another programmable logic device or a transistor logic device with any one of an application processor, a graphics processing unit, an image signal processor or a digital signal processor.
- The camera module of any one of the above embodiments may further include an infrared radiation (IR) filter, and the IR filter may be located between the light splitting assembly and the first image sensing assembly and/or between the light splitting assembly and the second image sensing assembly.
- the IR filter can be used to block or absorb infrared light to prevent damage to the image sensing assembly; furthermore, the IR filter can be configured to have no effect on the focal length of the first optical lens assembly.
- the material of the IR filter may be glass or a glass-like resin, such as blue glass.
- For example, the camera module may include a first optical lens assembly, a light splitting assembly, a first image sensing assembly, a second image sensing assembly, a first polarizer located between the light splitting assembly and the first image sensing assembly, and a first processing component. Taking the structure in FIG. 5a as an example for the first optical lens assembly, a beam splitting prism as an example for the light splitting assembly, and (a) in FIG. 7 as an example for the first image sensing assembly: for possible implementations of the first optical lens assembly, the light splitting assembly, the first image sensing assembly, the second image sensing assembly, the first polarizer and the first processing component, reference may be made to the foregoing related descriptions, which will not be repeated here.
- The light from the target object passes through the first optical lens assembly and is transmitted to the light splitting assembly, where it is divided into a first light and a second light; the first light propagates to the first image sensing assembly and the second light propagates to the second image sensing assembly. The first image sensing assembly converts the first light into a first electrical signal (that is, the information of the first image), and the second image sensing assembly converts the second light into a second electrical signal (that is, the information of the second image). The first electrical signal can be converted into a first digital image signal through an analog-to-digital converter (A/D), and the second electrical signal can likewise be converted into a second digital image signal. Both the first digital image signal and the second digital image signal are transmitted to the ISP for processing (for example, fusion) to obtain an image of the target object in a specific format. Further, optionally, the ISP can transmit the image of the target object to a display screen for display.
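- The signal chain described above can be sketched as a toy numerical model (not the actual ISP pipeline; the split ratio, sensitivities and 8-bit A/D conversion are illustrative assumptions):

```python
import numpy as np

def capture(scene_radiance, split, sensitivity, full_well=1.0, bits=8):
    """Model one sensing path: optical split, photoelectric conversion,
    saturation at the full-well level, then A/D conversion."""
    signal = np.clip(scene_radiance * split * sensitivity, 0.0, full_well)
    return np.round(signal / full_well * (2 ** bits - 1)).astype(np.uint16)

scene = np.array([0.001, 0.05, 0.5, 5.0, 50.0])  # arbitrary radiance samples

# High-resolution path: 20% of the light, low sensitivity.
code_hi = capture(scene, split=0.2, sensitivity=0.25)
# Low-resolution path: 80% of the light, high sensitivity.
code_lo = capture(scene, split=0.8, sensitivity=1.0)

print(code_hi)  # bright samples stay below saturation longer
print(code_lo)  # moderately dim samples are lifted above the quantization floor
```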
- As shown in FIG. 10, the camera module may include a second optical lens assembly 1001, a third optical lens assembly 1002, a first image sensing assembly 1003 and a second image sensing assembly 1004. The photosensitive areas of the first image sensing assembly 1003 and the second image sensing assembly 1004 are the same, and the resolution of the first image sensing assembly 1003 is greater than that of the second image sensing assembly 1004. The second optical lens assembly 1001 is used to receive the light from the target object and transmit the third light to the first image sensing assembly 1003; the third optical lens assembly 1002 is used to receive the light from the target object and transmit the fourth light to the second image sensing assembly 1004.
- through the second optical lens assembly and the third optical lens assembly, the camera module can realize dual-channel imaging: the second optical lens assembly can transmit the third light from the target object to the first image sensing assembly, and the third optical lens assembly can transmit the fourth light from the target object to the second image sensing assembly; since sensitivity and resolution trade off against each other when the photosensitive area is fixed, the sensitivity of the first image sensing assembly is less than the sensitivity of the second image sensing assembly.
- the first image sensing component has high resolution and low sensitivity to light, is not easily overexposed to bright light, can identify high-brightness light, and can be used for imaging of bright targets;
- the second image sensing component has lower resolution and higher sensitivity to light, can recognize low-brightness light, and can be used for imaging of dim targets.
- the camera module can image both dim targets and bright targets, thereby helping to improve the dynamic range of the camera module.
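As a rough, editor-added illustration of the trade-off described above, the sketch below estimates relative per-pixel sensitivity for two sensors that share one photosensitive area but differ in resolution; the area value and the assumption that sensitivity scales with pixel area are illustrative only and not values taken from this application.

```python
# Illustrative sketch: relative sensitivity of two sensors that share one
# photosensitive area but differ in resolution (assumes sensitivity ~ pixel area).

def relative_sensitivity(area_mm2: float, pixels_a: int, pixels_b: int) -> float:
    """Return per-pixel sensitivity of sensor A relative to sensor B (same total area)."""
    pixel_area_a = area_mm2 / pixels_a  # area of one pixel on sensor A
    pixel_area_b = area_mm2 / pixels_b  # area of one pixel on sensor B
    return pixel_area_a / pixel_area_b

# Hypothetical values: an 8M high-resolution sensor vs a 2M low-resolution sensor.
ratio = relative_sensitivity(area_mm2=30.0, pixels_a=8_000_000, pixels_b=2_000_000)
print(ratio)  # 0.25 -> the 8M sensor is ~4x less sensitive per pixel than the 2M sensor
```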
- the information carried by the third light ray is the same as the information carried by the light ray from the target object
- the information carried by the fourth light ray is also the same as the information carried by the light ray from the target object.
- the third light ray is a part or all of the light ray from the target object
- the fourth light ray is also a part or all of the light ray from the target object.
- since the distance between the second optical lens assembly and the third optical lens assembly is relatively small, the field of view of the second optical lens assembly is the same as the field of view of the third optical lens assembly, or the parallax between them can be ignored.
- the light from the target object received by the second optical lens assembly is the same as the light from the target object received by the third optical lens assembly, or the difference is negligible.
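Purely as a back-of-the-envelope check of why this parallax can be neglected (the baseline and object distance below are hypothetical, not values from this application), the angular parallax between two closely spaced lens assemblies can be estimated from the baseline and the object distance:

```python
import math

def parallax_deg(baseline_m: float, object_distance_m: float) -> float:
    """Approximate angular parallax (degrees) between two closely spaced lens assemblies."""
    return math.degrees(math.atan2(baseline_m, object_distance_m))

# Hypothetical numbers: lens centers 10 mm apart, target 10 m away.
print(parallax_deg(0.01, 10.0))  # ~0.057 degrees, i.e. negligible for most scenes
```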
- the second optical lens assembly can receive the light from the target object and, by changing the propagation direction of that light, transmit the third light to the first image sensing assembly;
- the third optical lens assembly can receive the light from the target object and, by changing the propagation direction of that light, propagate the fourth light to the second image sensing assembly.
- both the second optical lens assembly and the third optical lens assembly may transmit glare (eg, glare reflected from the road surface) into the camera module.
- the f-number (i.e., the F-number) of the second optical lens assembly is greater than the f-number of the third optical lens assembly. It can also be understood that the intensity of the third light transmitted from the target object through the second optical lens assembly is smaller than the intensity of the fourth light transmitted through the third optical lens assembly; in other words, the intensity of the third light is smaller than that of the fourth light. It should be understood that a small f-number and a large amount of light transmission are conducive to imaging dim, weak light, while a large f-number and a small amount of light transmission are conducive to imaging bright light.
- the ratio of the intensities of the third light and the fourth light may be changed through the aperture number of the second optical lens assembly and the aperture number of the third optical lens assembly.
- for example, the ratio of the intensities of the third light and the fourth light may be 2:8; for another example, the ratio may be 1:9; for yet another example, the ratio may be 4:6. It can also be understood that the intensity of the third light received by the first image sensing assembly is smaller than the intensity of the fourth light received by the second image sensing assembly.
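As a hedged illustration of how two f-numbers could be chosen to reach such a target ratio, the sketch below uses the standard photographic relation that admitted light scales roughly with 1/N² for f-number N; the concrete f-numbers are examples and are not values mandated by this application.

```python
def intensity_ratio_from_f_numbers(n_second: float, n_third: float) -> float:
    """Approximate ratio of third-light to fourth-light intensity, assuming the
    admitted light scales with 1/N^2 for f-number N under identical scene conditions."""
    return (n_third / n_second) ** 2

# Example: second lens assembly at f/4.0, third at f/2.0 -> ratio 0.25, i.e. about 2:8.
print(intensity_ratio_from_f_numbers(4.0, 2.0))
```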
- the second optical lens assembly and the third optical lens assembly may both be optical lens assemblies with a fixed aperture, wherein the second optical lens assembly is a small-aperture optical lens assembly and the third optical lens assembly is a large-aperture optical lens assembly.
- alternatively, both the second optical lens assembly and the third optical lens assembly may be optical lens assemblies with an adjustable aperture, with the f-number of the second optical lens assembly adjusted to be greater than the f-number of the third optical lens assembly.
- the camera module can realize dual-channel imaging, and the amount of light passing through different channels can be controlled by the aperture of the corresponding optical lens assembly.
- the second optical lens assembly with a large f-number cooperates with the high-resolution first image sensing assembly, and the third optical lens assembly with a small f-number cooperates with the low-resolution second image sensing assembly, thereby helping to improve the dynamic range of the camera module.
- the second optical lens assembly and the third optical lens assembly have the same focal length.
- in this way, the third light passing through the second optical lens assembly and the fourth light passing through the third optical lens assembly can be converged on the same plane, so that the first image sensing assembly and the second image sensing assembly can be arranged on the same substrate (see FIG. 11 below), which helps simplify the assembly of the camera module.
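A minimal sketch of why equal focal lengths place both image planes at the same distance, using the thin-lens approximation; the focal length and object distance below are made-up example values, not parameters disclosed in this application.

```python
def image_distance(focal_length_m: float, object_distance_m: float) -> float:
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i."""
    return 1.0 / (1.0 / focal_length_m - 1.0 / object_distance_m)

# Hypothetical: both lens assemblies have f = 6 mm, object 5 m away.
f = 0.006
print(image_distance(f, 5.0))  # same d_i for both channels -> sensors can share one substrate
```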
- the camera module of the second embodiment can be applied to a fixed-focus imaging scene.
- the image circles of the second optical lens assembly and the third optical lens assembly are the same, so that the effective areas used for imaging by the first image sensing assembly and the second image sensing assembly are consistent.
- for the structures of the second optical lens assembly and the third optical lens assembly, reference may be made to the description of the aforementioned first optical lens assembly, which will not be repeated here.
- the structures of the second optical lens assembly and the third optical lens assembly may be the same or different, which are not limited in this application.
- the optical lenses in the second optical lens assembly and/or the third optical lens assembly may be cut in the height direction of the camera module.
- the first image sensing assembly is used to perform photoelectric conversion on the received third light to obtain information of the fourth image; the second image sensing assembly is used to perform photoelectric conversion on the received fourth light to obtain information of the fifth image; the information of the fourth image and the information of the fifth image are used to form an image of the target object.
- in the foregoing descriptions, the first light can be replaced by the third light, the second light by the fourth light, the information of the first image by the information of the fourth image, the information of the second image by the information of the fifth image, and the information of the third image by the information of the sixth image, and so on.
- the camera module in the second embodiment may further include a second polarizer. Further, optionally, the camera module may further include a second processing component. The second polarizer and the second processing component are respectively introduced below.
- the second polarizer is located between the second optical lens assembly and the first image sensing assembly (see FIG. 11 below).
- the second polarizer is used to allow light parallel to the polarization direction of the second polarizer to pass through, and the third light passing through the second polarizer is converged on the first image sensing assembly.
- the second polarizer allows the part of the third light that is parallel to the polarization direction of the second polarizer to pass through, and the rest is filtered out. In this way, the intensity of the third light entering the first image sensing assembly can be further reduced, which further reduces the overexposure of the first image sensing assembly to bright light and thus further improves the dynamic range of the camera module.
- the third light passing through the second polarizer is a part of the third light entering the second polarizer (i.e., the part of the third light parallel to the polarization direction of the second polarizer).
- optionally, the polarization direction of the second polarizer is perpendicular to the main polarization direction of the glare transmitted through the second optical lens assembly, and the second polarizer is introduced into the optical path of the high-resolution first image sensing assembly.
- by designing the polarization direction of the second polarizer to be perpendicular to the main polarization direction of the received glare, the glare entering the high-resolution first image sensing assembly can be eliminated or reduced (the specific principle can be understood with reference to the polarization state of the glare and the description of the aforementioned FIG. 2), so that a glare-free, high-dynamic-range image can be obtained.
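A small sketch of the underlying Malus's-law behaviour on which this glare suppression relies: an ideal polarizer transmits I0·cos²θ of linearly polarized light whose polarization is at angle θ to the polarizer's polarization direction. The angles below are illustrative only.

```python
import math

def transmitted_intensity(i0: float, theta_deg: float) -> float:
    """Malus's law: intensity passed by an ideal polarizer for linearly polarized
    input at angle theta to the polarizer's polarization direction."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

# Glare whose main polarization is perpendicular (90 deg) to the second polarizer
# is almost fully suppressed, while light aligned with the polarizer passes through.
print(transmitted_intensity(1.0, 90.0))  # ~0 -> glare removed
print(transmitted_intensity(1.0, 0.0))   # 1.0 -> aligned component kept
```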
- taking as an example that the resolution of the first image sensing assembly is 8M, the resolution of the second image sensing assembly is 2M, and the ratio of the intensities of the third light and the fourth light is 2:8:
- Table 4 exemplarily shows the relationship between the dynamic range of the camera module based on the second embodiment and the dynamic range of the camera module based on the prior art.
- Table 4: relationship between the dynamic range of the camera module of the second embodiment and the dynamic range of a camera module based on the prior art.
- when the camera module further includes the second polarizer, in combination with Table 4 above and similar to the analysis of Table 1 above, the overexposure-prevention capability of the first image sensing assembly for high-brightness targets is improved by a factor of 2, the low-light capability of the second image sensing assembly for dim targets is increased by a factor of 4, and the dynamic range of the camera module is 10log8.
- the relationship between the ratio of the intensities of the third light rays and the fourth light rays and the improvement effect of the dynamic range can be found in Table 3 above, which will not be repeated here.
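The "10log8" figure quoted above can be reproduced with a one-line calculation that expresses the combined 2x overexposure-prevention gain and 4x low-light gain in decibels; this is only a numerical restatement of the text, not additional disclosed data.

```python
import math

bright_gain = 2   # overexposure-prevention improvement of the first image sensing assembly
dim_gain = 4      # low-light improvement of the second image sensing assembly

dynamic_range_gain_db = 10 * math.log10(bright_gain * dim_gain)
print(round(dynamic_range_gain_db, 2))  # ~9.03 dB, i.e. "10log8"
```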
- the second processing assembly may receive the information of the fourth image from the first image sensing assembly and the information of the fifth image from the second image sensing assembly, and obtain the image of the target object according to the information of the fourth image and the information of the fifth image.
- specifically, the second processing assembly is configured to upsample the information of the fifth image to obtain the information of the sixth image, where the resolution of the sixth image corresponding to the information of the sixth image is the same as the resolution of the fourth image corresponding to the information of the fourth image; the information of the fourth image and the information of the sixth image are then fused to obtain the image of the target object.
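A minimal, hypothetical sketch of what this upsample-and-fuse step could look like; nearest-neighbour upsampling and simple averaging are stand-ins, since the application does not specify the interpolation or fusion algorithm, and the image sizes are toy values.

```python
import numpy as np

def upsample_nearest(image: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upsampling: repeat each pixel 'factor' times along each axis."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

def fuse(high_res: np.ndarray, upsampled_low_res: np.ndarray) -> np.ndarray:
    """Placeholder fusion: average the two same-resolution images."""
    return (high_res.astype(np.float32) + upsampled_low_res.astype(np.float32)) / 2.0

# Hypothetical sizes: fourth image 8x8 (stand-in for "8M"), fifth image 4x4 (stand-in for "2M").
fourth_image = np.random.rand(8, 8).astype(np.float32)
fifth_image = np.random.rand(4, 4).astype(np.float32)

sixth_image = upsample_nearest(fifth_image, factor=2)   # now matches the fourth image's resolution
target_image = fuse(fourth_image, sixth_image)
print(target_image.shape)  # (8, 8)
```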
- the camera module of the second embodiment may further include an infrared (IR) filter, and the IR filter may be located between the second optical lens assembly and the first image sensing assembly, and/or between the third optical lens assembly and the second image sensing assembly.
- FIG. 11 is a schematic structural diagram of another camera module provided by the present application.
- the camera module may include a second optical lens assembly, a third optical lens assembly, a first image sensing assembly, a second image sensing assembly, a second polarizer located between the second optical lens assembly and the first image sensing assembly, and a second processing assembly.
- the first image sensing assembly is illustrated by (a) in FIG. 7 above, and the second image sensing assembly is illustrated by (b) in FIG. 7 above.
- the second optical lens assembly and the third optical lens assembly can be represented by a single lens, and for the specific structure, please refer to the foregoing description of the first optical lens assembly.
- for possible implementations of the second optical lens assembly, the third optical lens assembly, the first image sensing assembly, the second image sensing assembly, the second polarizer, and the second processing assembly, reference may be made to the foregoing related descriptions, which will not be repeated here.
- the light from the target object passes through the second optical lens assembly to obtain the third light; the third light is propagated to the second polarizer and, after passing through the second polarizer, is propagated to the first image sensing assembly. The light from the target object passes through the third optical lens assembly to obtain the fourth light, and the fourth light is propagated to the second image sensing assembly.
- the first image sensing assembly converts the third light into a fourth electrical signal (that is, the information of the fourth image), and the second image sensing assembly converts the fourth light into a fifth electrical signal (that is, the information of the fifth image); the fourth electrical signal can be converted into a third digital image signal after A/D conversion, and the fifth electrical signal can be converted into a fourth digital image signal after A/D conversion. Both the third digital image signal and the fourth digital image signal are transmitted to the ISP for processing (e.g., fusion) to obtain an image of the target object in a specific format. Further, optionally, the ISP can transmit the image of the target object to the display screen for display.
- the present application can also provide a terminal device.
- the terminal device may include the camera module in the first embodiment above or may also include the camera module in the second embodiment above. Further, optionally, the terminal device may further include a memory and a processor, where the memory is used for storing programs or instructions; the processor is used for calling the programs or instructions to control the above-mentioned camera module to acquire the image of the target object. It can be understood that the terminal device may also include other devices, such as a wireless communication device, a touch screen, a display screen, and the like.
- FIG. 12 is a schematic structural diagram of a terminal device provided by this application.
- the terminal device 1200 may include a processor 1201, a memory 1202, a camera module 1203, a display screen 1204, and the like.
- the terminal device to which the present application applies may have more or fewer components than the terminal device shown in FIG. 12 , may combine two or more components, or may have different component configurations.
- the various components shown in Figure 12 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
- the processor 1201 may include one or more processing units.
- for example, the processor 1201 may include an application processor (AP), a graphics processing unit (GPU), an image signal processor (ISP), a controller, a digital signal processor (DSP), and so on.
- the memory 1202 may be random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), registers, hard disk, removable hard disk, CD-ROM or any other form of storage medium well known in the art.
- An exemplary memory 1202 is coupled to the processor 1201 such that the processor 1201 can read information from, and write information to, the memory 1202 .
- the memory 1202 may also be a component of the processor 1201 .
- the processor 1201 and the memory 1202 may also exist in the terminal device as discrete components.
- the camera module 1203 can be used to capture moving and still images, and the like.
- the terminal device may include one or N camera modules 1203, where N is a positive integer.
- when the camera module 1203 is applied as a vehicle camera module, it can be divided, based on its functions, into a driving assistance camera module, a parking assistance camera module, and an in-vehicle driver monitoring camera module.
- Driving assistance camera modules are used for driving records, lane departure warning, door opening warning, blind spot monitoring and traffic sign recognition.
- Driving assistance camera modules include intelligent forward vision (e.g., monocular/binocular/trinocular), which can be used for dynamic object detection (vehicles, pedestrians), static object detection (traffic lights, traffic signs, lane lines, etc.), drivable-space division, and the like; side-view assistance (e.g., wide-angle), used to monitor dynamic targets in the blind area of the rearview mirror during driving; and night-vision assistance (e.g., a night-vision camera), which can be used at night or in other poor-light situations to detect target objects.
- the parking assistance camera module can be used for a reversing image/360° surround view (e.g., wide-angle/fisheye), mainly for low-speed, close-range perception, and can form a seamless 360-degree panoramic top view around the vehicle.
- the in-vehicle driver monitoring camera module mainly provides one or more layers of early warning for dangerous situations such as driver fatigue, distraction, and irregular driving. Based on the different installation positions of the vehicle camera module in the terminal device, it can be divided into a front-view camera module, a side-view camera module, a rear-view camera module and a built-in camera module.
- Display screen 1204 may be used to display images, video, and the like.
- Display screen 1204 may include a display panel.
- the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
- the terminal device may include one or Q display screens 1204 , where Q is a positive integer greater than one.
- the terminal device may implement a display function through a GPU, a display screen 1204, a processor 1201, and the like.
- the terminal device may be, for example, a vehicle, a smart phone, a smart home device, a smart manufacturing device, a robot, an unmanned aerial vehicle, or an intelligent transportation device (such as an AGV or an unmanned transportation vehicle, etc.).
- the present application provides an imaging method, please refer to the introduction of FIG. 13 .
- the imaging method can be applied to any camera module shown in the first embodiment. It can also be understood that the imaging method can be implemented based on any camera module shown in the first embodiment above.
- the imaging method includes the following steps:
- Step 1301 receiving light from the target object.
- This step 1301 can be implemented by the first optical lens assembly, which may refer to the above-mentioned introduction of the first optical lens assembly receiving light from the target object, and details are not repeated here.
- Step 1302 splitting the light of the target object to obtain the first light and the second light, and propagating the first light to the first image sensing component and the second light to the second image sensing component.
- the ratio of the intensities of the first light ray to the second light ray is less than 1.
- This step 1302 may be implemented by a light splitting component; for possible implementation manners of the light splitting component, reference may be made to the foregoing related descriptions, which will not be repeated here.
- further, photoelectric conversion is performed on the first light to obtain the information of the first image, and photoelectric conversion is performed on the second light to obtain the information of the second image; an image of the target object is generated according to the information of the first image and the information of the second image.
- specifically, the information of the second image can be upsampled to obtain the information of the third image, and the information of the first image and the information of the third image can be fused to obtain the image of the target object, where the third image corresponding to the information of the third image has the same resolution as the first image corresponding to the information of the first image.
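For step 1302 of the method above, a tiny sketch of how a splitting ratio below 1 divides the incident intensity between the two channels; it assumes an idealized, lossless light splitting component, and the ratio value is just the 2:8 example from the text.

```python
def split_intensities(incident: float, ratio_first_to_second: float) -> tuple[float, float]:
    """Split an incident intensity into first/second rays with a given ratio (< 1),
    assuming an ideal, lossless light splitting component."""
    second = incident / (1.0 + ratio_first_to_second)
    first = incident - second
    return first, second

first_ray, second_ray = split_intensities(incident=1.0, ratio_first_to_second=2 / 8)
print(first_ray, second_ray)  # 0.2 and 0.8 -> matches the 2:8 example, and the ratio is < 1
```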
- the present application provides an imaging method, please refer to the introduction of FIG. 14 .
- the imaging method can be applied to any camera module shown in the second embodiment above. It can also be understood that the imaging method can be implemented based on any camera module shown in the second embodiment above.
- the imaging method includes the following steps:
- Step 1401 receiving light from the target object.
- Step 1402 propagating a third light ray to the first image sensing assembly, and propagating a fourth light ray to the second image sensing assembly.
- Both steps 1401 and 1402 can be implemented by the second optical lens assembly and the third optical lens assembly.
- further, photoelectric conversion may be performed on the received third light to obtain information of the fourth image, and photoelectric conversion may be performed on the received fourth light to obtain information of the fifth image; the information of the fourth image and the information of the fifth image are used to obtain the image of the target object.
- specifically, the information of the fifth image is upsampled to obtain the information of the sixth image, and the information of the fourth image and the information of the sixth image are fused to obtain the image of the target object, where the resolution of the sixth image corresponding to the information of the sixth image is the same as the resolution of the fourth image corresponding to the information of the fourth image.
- Some steps in the methods in the embodiments of the present application may be implemented by means of hardware, and may also be implemented by means of a processor executing software instructions.
- Software instructions can be composed of corresponding software modules, and software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor, such that the processor can read information from, and write information to, the storage medium.
- the storage medium can also be an integral part of the processor.
- the processor and storage medium may reside in an ASIC.
- the ASIC may be located in a network device or in an end device.
- the processor and storage medium may also exist as discrete components.
Landscapes
- Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
A camera module, a terminal device, and an imaging method, capable of being applied to the fields of automatic driving, assisted driving, security monitoring, etc. The camera module comprises a first optical lens component, a light splitting component, and a first image sensing component and a second image sensing component having the same photosensitive area; the resolution of the first image sensing component is greater than the resolution of the second image sensing component; the first optical lens component can be used for receiving light from a target object; and the light splitting component is used for splitting the light propagating via the first optical lens component to obtain first light and second light, propagating the first light to the first image sensing component, and propagating the second light to the second image sensing component. For the camera module, on the basis of two image sensing components having the same photosensitive area and different resolutions, the dynamic range of the camera module can be improved. The method can be applied to the Internet of Vehicles, such as vehicle to everything V2X, inter-vehicle communication long term evolution technology LTE-V, and vehicle‑to‑vehicle V2V.
Description
相关申请的交叉引用CROSS-REFERENCE TO RELATED APPLICATIONS
本申请要求在2021年03月24日提交中国专利局、申请号为202110311311.X、申请名称为“一种摄像头模组、终端设备及成像方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims the priority of the Chinese patent application filed with the Chinese Patent Office on March 24, 2021, with application number 202110311311.X and entitled "A camera module, terminal device and imaging method", the entire contents of which are incorporated herein by reference.
本申请涉及摄像头模组技术领域,尤其涉及一种摄像头模组、终端设备及成像方法。The present application relates to the technical field of camera modules, and in particular, to a camera module, a terminal device and an imaging method.
随着科技的发展,各类设备上集成了越来越多的功能,如摄像功能。用户对摄像功能的要求越来越高,比如,用户需要获得更高质量的图像。下面以车载摄像机为例说明。With the development of technology, more and more functions, such as camera functions, are integrated on various devices. Users have higher and higher requirements for camera functions, for example, users need to obtain higher quality images. The following is an example of a vehicle-mounted camera.
车载摄像机在辅助驾驶和自动驾驶中的作用日益重要。通过车载摄像机拍摄到的图像可以反映车辆周围的环境,从而可以为安全驾驶提供必要信息。车载摄像机中的动态范围是较为重要的功能参数,其中,动态范围是指车载摄像机拍摄的同一个画面内,能正常显示细节的最亮和最暗物体的亮度值所包含的区间。因此,如何获得宽动态范围的高质量图像是车载摄像机目前亟需解决的技术问题。In-vehicle cameras play an increasingly important role in assisted driving and autonomous driving. The images captured by the on-board camera can reflect the environment around the vehicle, thus providing necessary information for safe driving. The dynamic range in the vehicle-mounted camera is a relatively important functional parameter. The dynamic range refers to the interval included in the brightness values of the brightest and darkest objects that can normally display details in the same picture captured by the vehicle-mounted camera. Therefore, how to obtain high-quality images with a wide dynamic range is a technical problem that needs to be solved urgently in the vehicle camera.
目前,大多数方案是通过优化车载摄像机中传感器所包括的像素来提高图像的动态范围,例如优化像素的曝光时间。但是由于对于像素的优化是有限的,因此对于摄像头模组的动态范围的提升也较为有限。At present, most of the solutions are to improve the dynamic range of the image by optimizing the pixels included in the sensor in the vehicle camera, such as optimizing the exposure time of the pixels. However, since the optimization of pixels is limited, the improvement of the dynamic range of the camera module is also limited.
发明内容SUMMARY OF THE INVENTION
本申请提供一种摄像头模组、终端设备及成像方法,用于提高摄像头模组的动态范围。The present application provides a camera module, a terminal device and an imaging method, which are used to improve the dynamic range of the camera module.
第一方面,本申请提供一种摄像头模组,该摄像头模组可包括第一光学镜头组件、分光组件、第一图像传感组件和第二图像传感组件,第一图像传感组件与第二图像传感组件的感光面积相同,第一图像传感组件的分辨率大于第二图像传感组件的分辨率。其中,第一光学镜头组件用于接收来自目标物体的光线;分光组件用于对经由第一光学镜头组件传播的光线进行分光,得到第一光线和第二光线,并向第一图像传感组件传播第一光线,向第二图像传感组件传播第二光线。In a first aspect, the present application provides a camera module, the camera module may include a first optical lens assembly, a light splitting assembly, a first image sensing assembly and a second image sensing assembly, the first image sensing assembly and the first image sensing assembly. The photosensitive areas of the two image sensing assemblies are the same, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly. Wherein, the first optical lens assembly is used to receive light from the target object; the light splitting assembly is used to split the light propagating through the first optical lens assembly to obtain the first light and the second light, which are sent to the first image sensing assembly The first light is propagated, and the second light is propagated to the second image sensing component.
基于上述方案,由于图像传感组件的感光面积固定时,灵敏度和分辨率是此消彼长的,因此,第一图像传感组件的灵敏度小于第二图像传感组件的灵敏度。换言之,第一图像传感组件具有较高的分辨率且对光线的灵敏度较低,对亮光不容易过曝,可以识别高亮度的光线,可用于高亮目标的成像;第二图像传感组件具有较低的分辨率且对光线的灵敏度较高,可识别低亮度的光,可用于暗弱目标的成像。如此,可使得该摄像头模组既可以对暗弱目标成像又可以对高亮目标成像,从而有助于提高摄像头模组的动态范围。Based on the above solution, since the sensitivity and resolution are trade-offs when the photosensitive area of the image sensing assembly is fixed, the sensitivity of the first image sensing assembly is smaller than the sensitivity of the second image sensing assembly. In other words, the first image sensing component has high resolution and low sensitivity to light, is not easy to overexpose to bright light, can identify high-brightness light, and can be used for imaging of bright targets; the second image sensing component It has lower resolution and higher sensitivity to light, can identify low-brightness light, and can be used for imaging of dim targets. In this way, the camera module can image both dim targets and bright targets, thereby helping to improve the dynamic range of the camera module.
在一种可能的实现方式中,摄像头模组还可包括位于分光组件与第一图像传感组件之间的第一偏振片。In a possible implementation manner, the camera module may further include a first polarizer located between the light splitting component and the first image sensing component.
通过第一偏振片可以使得第一光线中与第一偏振片的偏振方向平行的光线通过,可进一步减弱射入第一图像传感组件的第一光线的强度,从而可进一步减弱第一图像传感组件对亮光的过曝,进而有助于进一步提高摄像头模组的动态范围。Through the first polarizer, the first light rays parallel to the polarization direction of the first polarizer can pass through, and the intensity of the first light rays entering the first image sensing component can be further weakened, thereby further weakening the transmission of the first image. Overexposure of the sensor component to bright light, which in turn helps to further improve the dynamic range of the camera module.
进一步,可选地,第一偏振片的偏振方向垂直于经由第一光学镜头组件传播的眩光的主偏振方向。Further, optionally, the polarization direction of the first polarizer is perpendicular to the main polarization direction of the glare propagating through the first optical lens assembly.
通过在高分辨率的第一图像传感组件的光路引入第一偏振片,并设计第一偏振片的偏振方向垂直于接收到的眩光的主偏振方向,可以消除或减弱进入高分辨率的第一图像传感组件的眩光,从而可使得摄像头模组获得消眩光且高动态范围的图像。By introducing a first polarizer in the optical path of the high-resolution first image sensing component, and designing the polarization direction of the first polarizer to be perpendicular to the main polarization direction of the received glare, it is possible to eliminate or weaken the first polarizer entering the high-resolution first image sensor. The glare of an image sensing component, so that the camera module can obtain a glare-free and high dynamic range image.
在一种可能的实现方式中,第一光线与第二光线的强度的比值小于1。如此,有助于进一步减弱进入高分辨率的第一图像传感组件的第一光线的强度,且可提升进入低分辨率的第二图像传感组件的第二光线的强度,从而有助于进一步提高摄像头模组的动态范围。In a possible implementation manner, the ratio of the intensity of the first light ray to the second light ray is less than 1. In this way, the intensity of the first light entering the high-resolution first image sensing component can be further reduced, and the intensity of the second light entering the low-resolution second image sensing component can be increased, thereby helping Further improve the dynamic range of the camera module.
在一种可能的实现方式中,第一图像传感组件用于对接收到的第一光线进行光电转换,得到第一图像的信息;第二图像传感组件用于对接收到的第二光线进行光电转换,得到第二图像的信息;第一图像的信息和第二图像的信息用于形成目标物体的图像。In a possible implementation manner, the first image sensing component is used for photoelectric conversion of the received first light to obtain information of the first image; the second image sensing component is used for the received second light Photoelectric conversion is performed to obtain information of the second image; the information of the first image and the information of the second image are used to form an image of the target object.
进一步,可选地,摄像头模组还包括第一处理组件,该第一处理组件可接收来自第一图像传感组件的第一图像的信息、以及接收来自第二图像传感组件的第二图像的信息,并根据第一图像的信息和第二图像的信息,生成目标物体的图像。Further, optionally, the camera module further includes a first processing component, the first processing component can receive the information of the first image from the first image sensing component and receive the second image from the second image sensing component information, and generate an image of the target object according to the information of the first image and the information of the second image.
具体地,第一处理组件用于对第二图像的信息进行上采样,得到第三图像的信息,第三图像的信息的对应的第三图像的分辨率与第一图像的信息对应的第一图像的分辨率相同;融合第一图像的信息与第三图像的信息,得到目标物体的图像。Specifically, the first processing component is configured to upsample the information of the second image to obtain the information of the third image, and the resolution of the third image corresponding to the information of the third image is the first image corresponding to the information of the first image. The resolutions of the images are the same; the information of the first image and the information of the third image are fused to obtain the image of the target object.
通过上述第一处理组件对第一图像的信息和第二图像的信息的融合,得到的目标物体的图像具有较高的质量。Through the fusion of the information of the first image and the information of the second image by the first processing component, the obtained image of the target object has higher quality.
第二方面,本申请提供一种终端设备,该终端设备可上述第一方面或第一方面中的任意一种摄像头模组。In a second aspect, the present application provides a terminal device, and the terminal device can be any camera module in the first aspect or the first aspect.
示例性地,终端设备例如可以是智能手机、车辆、智能家居设备、智能制造设备、机器人、无人机、测绘设备或智能运输设备。Exemplarily, the terminal device may be, for example, a smartphone, a vehicle, a smart home device, a smart manufacturing device, a robot, a drone, a surveying and mapping device, or an intelligent transportation device.
第三方面,本申请提供一种成像方法,该成像方法可应用于摄像头模组,该摄像头模组可包括第一图像传感组件和第二图像传感组件,第一图像传感组件与第二图像传感组件的感光面积相同,第一图像传感组件的分辨率大于第二图像传感组件的分辨率。该方法包括接收来自目标物体的光线;对目标物体的光线进行分光,得到第一光线和第二光线,并向第一图像传感组件传播第一光线,向第二图像传感组件传播第二光线。In a third aspect, the present application provides an imaging method, the imaging method can be applied to a camera module, the camera module can include a first image sensing component and a second image sensing component, the first image sensing component and the first image sensing component The photosensitive areas of the two image sensing assemblies are the same, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly. The method includes receiving light from a target object; splitting the light from the target object to obtain a first light and a second light, and propagating the first light to the first image sensing component, and propagating the second light to the second image sensing component light.
在一种可能的实现方式中,摄像头模组还包括位于分光组件与第一图像传感组件之间的第一偏振片。In a possible implementation manner, the camera module further includes a first polarizer located between the light splitting component and the first image sensing component.
在一种可能的实现方式中,第一偏振片的偏振方向垂直于经由第一光学镜头组件传播的眩光的主偏振方向。In a possible implementation manner, the polarization direction of the first polarizer is perpendicular to the main polarization direction of the glare light propagating through the first optical lens assembly.
在一种可能的实现方式中,第一光线与第二光线的强度的比值小于1。In a possible implementation manner, the ratio of the intensity of the first light ray to the second light ray is less than 1.
在一种可能的实现方式中,对第一光线进行光电转换,得到第一图像的信息;对第二光线进行光电转换,得到第二图像的信息;根据第一图像的信息和第二图像的信息,生成目标物体的图像。In a possible implementation manner, photoelectric conversion is performed on the first light to obtain the information of the first image; photoelectric conversion is performed on the second light to obtain the information of the second image; according to the information of the first image and the information of the second image information to generate an image of the target object.
进一步,可选地,可对第二图像的信息进行上采样,得到第三图像的信息,融合第一 图像的信息与第三图像的信息,得到目标物体的图像,其中,第三图像的信息对应的第三图像与第一图像的信息对应的第一图像的分辨率相同。Further, optionally, the information of the second image can be upsampled to obtain the information of the third image, and the information of the first image and the information of the third image can be fused to obtain the image of the target object, wherein the information of the third image is The corresponding third image has the same resolution as the first image corresponding to the information of the first image.
在一种可能的实现方式中,第三方面可应用的摄像头模组可以是上述第一方面或第一方面中的任意一种摄像头模组。In a possible implementation manner, the camera module applicable to the third aspect may be the first aspect or any camera module in the first aspect.
上述第二方面和第三方面中任一方面可以达到的技术效果可以参照上述第一方面中有益效果的描述,此处不再重复赘述。For the technical effects that can be achieved in any one of the second aspect and the third aspect, reference may be made to the description of the beneficial effects in the first aspect, which will not be repeated here.
第四方面,本申请提供一种摄像头模组,包括第二光学镜头组件、第三光学镜头组件、第一图像传感组件和第二图像传感组件,第一图像传感组件与第二图像传感组件的感光面积相同,第一图像传感组件的分辨率大于第二图像传感组件的分辨率;第二光学镜头组件用于接收来自目标物体的光线,并向第一图像传感组件传播第三光线;第三光学镜头组件用于接收来自目标物体的光线,并向第二图像传感组件传播第四光线。In a fourth aspect, the present application provides a camera module including a second optical lens assembly, a third optical lens assembly, a first image sensing assembly and a second image sensing assembly, the first image sensing assembly and the second image sensing assembly The photosensitive areas of the sensing assemblies are the same, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly; the second optical lens assembly is used to receive light from the target object and transmit light to the first image sensing assembly. Propagating the third light; the third optical lens assembly is used for receiving the light from the target object, and propagating the fourth light to the second image sensing assembly.
基于该方案,通过第二光学镜头组件与第三光学镜头组件,摄像头模组可以实现双通道成像,且第二光学镜头组件可将来自目标物体的第三光线传播至第一图像传感组件,第三光学镜头组件可将来自目标物体的第四光线传播至第二图像传感组件。由于图像传感组件的感光面积固定时,灵敏度和分辨率是此消彼长的,因此,第一图像传感组件的灵敏度小于第二图像传感组件的灵敏度。也就是说,第一图像传感组件具有较高的分辨率且对光线的灵敏度较低,对亮光不容易过曝,可以识别高亮度的光线,可用于高亮目标的成像;第二图像传感组件具有较低的分辨率且对光线的灵敏度较高,可识别低亮度的光,可用于暗弱目标的成像。如此,可使得该摄像头模组既可以对暗弱目标成像又可以对高亮目标成像,从而有助于提高摄像头模组的动态范围。Based on this solution, through the second optical lens assembly and the third optical lens assembly, the camera module can realize dual-channel imaging, and the second optical lens assembly can transmit the third light from the target object to the first image sensing assembly, The third optical lens assembly can transmit the fourth light from the target object to the second image sensing assembly. Since the sensitivity and resolution are trade-offs when the photosensitive area of the image sensing assembly is fixed, the sensitivity of the first image sensing assembly is smaller than the sensitivity of the second image sensing assembly. That is to say, the first image sensing component has high resolution and low sensitivity to light, is not easily overexposed to bright light, can identify high-brightness light, and can be used for imaging of bright targets; The sensor component has lower resolution and higher sensitivity to light, can recognize low-brightness light, and can be used for imaging of dim targets. In this way, the camera module can image both dim targets and bright targets, thereby helping to improve the dynamic range of the camera module.
在一种可能的实现方式中,摄像头模组还包括位于第二光学镜头组件与第一图像传感组件之间的第二偏振片。In a possible implementation manner, the camera module further includes a second polarizer located between the second optical lens assembly and the first image sensing assembly.
通过第二偏振片可以使得第三光线中与第二偏振片的偏振方向平行的光线通过,可进一步减弱射入第一图像传感组件的第三光线的强度,从而可进一步减弱第一图像传感组件对亮光的过曝,进而有助于进一步提高摄像头模组的动态范围。Through the second polarizer, the third light rays parallel to the polarization direction of the second polarizer can pass through, and the intensity of the third light rays entering the first image sensing component can be further weakened, thereby further weakening the transmission of the first image. Overexposure of the sensor component to bright light, which in turn helps to further improve the dynamic range of the camera module.
进一步,可选地,第二偏振片的偏振方向垂直于经由第二光学镜头组件传播的眩光的主偏振方向。Further, optionally, the polarization direction of the second polarizer is perpendicular to the main polarization direction of the glare light propagating through the second optical lens assembly.
通过在高分辨率的第一图像传感组件的光路引入第二偏振片,并设计第二偏振片的偏振方向垂直于接收到的眩光的主偏振方向,可以消除或减弱进入高分辨率的第一图像传感组件的眩光,从而可使得摄像头模组获得消眩光且高动态范围的图像。By introducing a second polarizer in the optical path of the high-resolution first image sensing component, and designing the polarization direction of the second polarizer to be perpendicular to the main polarization direction of the received glare, it is possible to eliminate or weaken the entry into the high-resolution first image sensor. The glare of an image sensing component, so that the camera module can obtain a glare-free and high dynamic range image.
在一种可能的实现方式中,第二光学镜头组件的光圈数大于第三光学镜头组件的光圈数。In a possible implementation manner, the aperture number of the second optical lens assembly is greater than the aperture number of the third optical lens assembly.
不同通道的通光量(即光线的强度)可通过对应的光学镜头组件的光圈控制,通过设置第二光学镜头组件的光圈数大于第三光学镜头组件的光圈数,可使得进入高分辨率的第一图像传感组件的第三光线的强度相较于进入低分辨率的第二图像传感组件的第四光线的强度,即光圈数大的第二光学镜头组件配合高分辨率的第一图像传感组件,光圈数小的第三光学镜头组件配合低分辨率的第二图像传感组件,从而有助于进一步提高摄像头模组的动态范围。The amount of light passing through different channels (that is, the intensity of light) can be controlled by the aperture of the corresponding optical lens assembly. The intensity of the third light of an image sensing element is compared with the intensity of the fourth light entering the second image sensing element with low resolution, that is, the second optical lens element with a large aperture is matched with the first image with high resolution In the sensing assembly, the third optical lens assembly with a small aperture number cooperates with the second image sensing assembly with low resolution, thereby helping to further improve the dynamic range of the camera module.
在一种可能的实现方式中,第二光学镜头组件与第三光学镜头组件的焦距相同。如此,可使得通过第二光学镜头组件的第三光线与通过第三光学镜头组件的第四光线汇聚于相 同的平面,从而可使得第一图像传感组件与第二图像传感组件设置于相同的基板上,进而有助于简化摄像头模组的装配。In a possible implementation manner, the second optical lens assembly and the third optical lens assembly have the same focal length. In this way, the third light passing through the second optical lens assembly and the fourth light passing through the third optical lens assembly can be converged on the same plane, so that the first image sensing assembly and the second image sensing assembly can be arranged in the same plane on the substrate, which helps to simplify the assembly of the camera module.
在一种可能的实现方式中,第一图像传感组件用于对接收到的第三光线进行光电转换,得到第四图像的信息;第二图像传感组件用于对接收到的第四光线进行光电转换,得到第五图像的信息;其中,第四图像的信息和第五图像的信息用于形成目标物体的图像。In a possible implementation manner, the first image sensing component is used to perform photoelectric conversion on the received third light to obtain information of the fourth image; the second image sensing component is used for the received fourth light Photoelectric conversion is performed to obtain information of the fifth image; wherein, the information of the fourth image and the information of the fifth image are used to form an image of the target object.
进一步,可选地,摄像头模组还包括第二处理组件,用于接收来自第一图像传感组件的第四图像的信息、以及接收来自第二图像传感组件的第五图像的信息,并根据第四图像的信息和第五图像的信息,得到目标物体的图像。Further, optionally, the camera module further includes a second processing component for receiving information of the fourth image from the first image sensing component and receiving information from the fifth image from the second image sensing component, and According to the information of the fourth image and the information of the fifth image, an image of the target object is obtained.
具体地,第二处理组件用于对第五图像的信息进行上采样,得到第六图像的信息,第六图像的信息对应的第六图像的分辨率与第四图像的信息对应的第四图像的分辨率相同,融合第四图像的信息和第六图像的信息,得到目标物体的图像。Specifically, the second processing component is configured to upsample the information of the fifth image to obtain the information of the sixth image, and the resolution of the sixth image corresponding to the information of the sixth image is the fourth image corresponding to the information of the fourth image The resolution is the same, and the information of the fourth image and the information of the sixth image are fused to obtain the image of the target object.
第五方面,本申请提供一种终端设备,该终端设备可上述第四方面或第四方面中的任意一种摄像头模组。In a fifth aspect, the present application provides a terminal device, and the terminal device can be any camera module in the fourth aspect or the fourth aspect.
示例性地,终端设备例如可以是智能手机、车辆、智能家居设备、智能制造设备、机器人、无人机、测绘设备或智能运输设备。Exemplarily, the terminal device may be, for example, a smartphone, a vehicle, a smart home device, a smart manufacturing device, a robot, a drone, a surveying and mapping device, or an intelligent transportation device.
第六方面,本申请提供一种成像方法,该方法可应用于摄像头模组,该摄像头模组可包括第一图像传感组件和第二图像传感组件,第一图像传感组件与第二图像传感组件的感光面积相同,第一图像传感组件的分辨率大于第二图像传感组件的分辨率。该方法包括接收来自目标物体的光线,并向第一图像传感组件传播第三光线,向第二图像传感组件传播第四光线。In a sixth aspect, the present application provides an imaging method, which can be applied to a camera module, and the camera module can include a first image sensing assembly and a second image sensing assembly, the first image sensing assembly and the second image sensing assembly. The photosensitive areas of the image sensing assemblies are the same, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly. The method includes receiving light from a target object, and propagating a third light to a first image sensing assembly and a fourth light to a second image sensing assembly.
在一种可能的实现方式中,摄像头模组还包括位于第二光学镜头组件与第一图像传感组件之间的第二偏振片。In a possible implementation manner, the camera module further includes a second polarizer located between the second optical lens assembly and the first image sensing assembly.
进一步,可选地,第二偏振片的偏振方向垂直于经由第二光学镜头组件传播的眩光的主偏振方向。Further, optionally, the polarization direction of the second polarizer is perpendicular to the main polarization direction of the glare light propagating through the second optical lens assembly.
在一种可能的实现方式中,第二光学镜头组件的光圈数大于第三光学镜头组件的光圈数。In a possible implementation manner, the aperture number of the second optical lens assembly is greater than the aperture number of the third optical lens assembly.
在一种可能的实现方式中,第二光学镜头组件与第三光学镜头组件的焦距相同。In a possible implementation manner, the second optical lens assembly and the third optical lens assembly have the same focal length.
在一种可能的实现方式中,可对接收到的第三光线进行光电转换,得到第四图像的信息;对接收到的第四光线进行光电转换,得到第五图像的信息;并根据第四图像的信息和第五图像的信息,得到目标物体的图像。In a possible implementation manner, photoelectric conversion may be performed on the received third light to obtain information of the fourth image; photoelectric conversion may be performed on the received fourth light to obtain information of the fifth image; The information of the image and the information of the fifth image are used to obtain the image of the target object.
具体地,对第五图像的信息进行上采样,得到第六图像的信息,融合第四图像的信息和第六图像的信息,得到目标物体的图像,其中,第六图像的信息对应的第六图像的分辨率与第四图像的信息对应的第四图像的分辨率相同。Specifically, the information of the fifth image is up-sampled to obtain the information of the sixth image, and the information of the fourth image and the information of the sixth image are fused to obtain the image of the target object, wherein the information of the sixth image corresponds to the sixth image. The resolution of the image is the same as the resolution of the fourth image corresponding to the information of the fourth image.
在一种可能的实现方式中,第六方面中应用的摄像头模组可以是上述第四方面或第四方面中的任意一种摄像头模组。In a possible implementation manner, the camera module applied in the sixth aspect may be any one of the fourth aspect or the fourth aspect.
上述第五方面和第六方面中任一方面可以达到的技术效果可以参照上述第四方面中有益效果的描述,此处不再重复赘述。For the technical effects that can be achieved in any one of the fifth aspect and the sixth aspect, reference may be made to the description of the beneficial effects in the fourth aspect, which will not be repeated here.
第七方面,本申请提供一种计算机可读存储介质,计算机可读存储介质中存储有计算机程序或指令,当计算机程序或指令被摄像头模组执行时,使得该摄像头模组执行上述第三方面或第三方面的任意可能的实现方式中的方法、或者使得该摄像头模组执行上述第六 方面或第六方面的任意可能的实现方式中的方法。In a seventh aspect, the present application provides a computer-readable storage medium in which a computer program or instruction is stored, and when the computer program or instruction is executed by the camera module, the camera module is made to perform the above-mentioned third aspect Or the method in any possible implementation manner of the third aspect, or the camera module is made to execute the above sixth aspect or the method in any possible implementation manner of the sixth aspect.
图1a为本申请提供的一种像素的尺寸与分辨率的关系示意图;1a is a schematic diagram of the relationship between the size and resolution of a pixel provided by the application;
图1b为本申请提供的另一种像素的尺寸与分辨率的关系示意图;1b is a schematic diagram of the relationship between the size and resolution of another pixel provided by the application;
图1c为本申请提供的一种自然光的偏振态的示意图;Fig. 1c is the schematic diagram of the polarization state of a kind of natural light provided by the application;
图1d为本申请提供的一种线偏振光的偏振态的示意图;1d is a schematic diagram of the polarization state of a linearly polarized light provided by the application;
图1e为本申请提供的另一种线偏振光的偏振态的示意图;1e is a schematic diagram of the polarization state of another linearly polarized light provided by the application;
图1f为本申请提供的一种部分偏振光的偏振态的示意图;Figure 1f is a schematic diagram of the polarization state of a partially polarized light provided by the application;
图1g为本申请提供的一种图像融合的原理示意图;FIG. 1g is a schematic diagram of the principle of a kind of image fusion provided by the application;
图2为本申请提供的一种偏振片抗眩光的原理示意图;Fig. 2 is the principle schematic diagram of a kind of polarizer anti-glare provided by the application;
图3a为本申请提供的一种摄像头模组可能的应用场景示意图;3a is a schematic diagram of a possible application scenario of a camera module provided by the application;
图3b为本申请提供的另一种摄像头模组可能的应用场景示意图;3b is a schematic diagram of another possible application scenario of the camera module provided by the application;
图4为本申请提供的一种摄像头模组的结构示意图;4 is a schematic structural diagram of a camera module provided by the application;
图5a为本申请提供的一种第一光学镜头组件的结构示意图;5a is a schematic structural diagram of a first optical lens assembly provided by the application;
图5b为本申请提供的另一种第一光学镜头组件的结构示意图;5b is a schematic structural diagram of another first optical lens assembly provided by the application;
图6本申请提供的一种分光组件的分光原理示意图;6 is a schematic diagram of the spectroscopic principle of a spectroscopic component provided by the present application;
图7为本申请提供的一种第一图像传感组件与第二图像传感组件的关系示意图;7 is a schematic diagram of the relationship between a first image sensing assembly and a second image sensing assembly provided by the application;
图8为本申请提供的一种第一图像与第二图像融合的过程示意图;8 is a schematic diagram of a process of fusion of a first image and a second image provided by the present application;
图9为本申请提供的另一种摄像头模组的结构示意图;9 is a schematic structural diagram of another camera module provided by the application;
图10为本申请提供的又一种摄像头模组的结构示意图;10 is a schematic structural diagram of another camera module provided by the application;
图11为本申请提供的又一种摄像头模组的结构示意图;11 is a schematic structural diagram of another camera module provided by the application;
图12为本申请提供的一种终端设备的结构示意图;12 is a schematic structural diagram of a terminal device provided by the application;
图13为本申请提供的一种成像方法的方法流程示意图;13 is a schematic flowchart of an imaging method provided by the application;
图14为本申请提供的另一种成像方法的方法流程示意图。FIG. 14 is a schematic flowchart of another imaging method provided by the present application.
下面将结合附图,对本申请实施例进行详细描述。The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
以下,对本申请中的部分用语进行解释说明。需要说明的是,这些解释是为了便于本领域技术人员理解,并不是对本申请所要求的保护范围构成限定。Hereinafter, some terms used in this application will be explained. It should be noted that these explanations are for the convenience of those skilled in the art to understand, and do not constitute a limitation on the protection scope required by the present application.
一、眩光(Dazzle)1. Dazzle
眩光是指视野中由于不适宜亮度分布,在空间或时间上存在极端亮度对比,以致引起视觉不舒适和降低物体可见度的视觉条件。视野内产生人眼无法适应的光亮感觉,可能引起厌恶、不舒服甚或丧失明视度。在视野中某一局部地方出现过高的亮度或前后发生过大的亮度变化。例如,汽车行驶在路面,被雨水撒过的路面是一个良好的镜面,强光源(如太阳)经过镜面反射进入到车载摄像机会产生强烈的眩光;再比如,高温下的路面,在垂直于路面的方向上,由于温度梯度的存在,折射率越靠近路面越低,光线斜入射路面同样会弯折,此时的路面等效为一个良好的镜面,强光源经过镜面反射进入车载摄像机产生眩光。Glare refers to the visual conditions that cause visual discomfort and reduce the visibility of objects due to the extreme brightness contrast in space or time due to unsuitable brightness distribution in the field of view. A bright sensation in the visual field that the human eye cannot adapt to, which may cause disgust, discomfort or even loss of vision. Excessive brightness in a certain part of the field of view or excessive brightness changes before and after. For example, when a car is driving on the road, the road that has been sprinkled by rain is a good mirror surface, and strong light sources (such as the sun) are reflected by the mirror and enter the vehicle camera, which will cause strong glare; In the direction of , due to the existence of temperature gradient, the refractive index is lower as it is closer to the road surface, and the light incident on the road surface will also bend. At this time, the road surface is equivalent to a good mirror surface, and the strong light source is reflected by the mirror surface and enters the vehicle camera to generate glare.
二、动态范围2. Dynamic range
动态范围是摄像头模组的一个重要参数,是指摄像头模组拍摄的同一个画面内,能正常显示细节的最亮和最暗物体的亮度值所包含的区间。动态范围越大,过亮或过暗的物体在同一个画面中都能正常显示的程度也就越大。The dynamic range is an important parameter of the camera module, which refers to the interval included in the brightness values of the brightest and darkest objects that can normally display details in the same picture captured by the camera module. The greater the dynamic range, the greater the degree to which objects that are too bright or too dark can be displayed properly in the same frame.
三、上采样3. Upsampling
上采样也可称为放大图像或图像插值,主要目的是放大图像,从而可以获得更高分辨率的图像。Upsampling can also be called image upscaling or image interpolation, and the main purpose is to upscale the image so that a higher resolution image can be obtained.
上采样原理:图像放大通常采用内插值方法,即在原有图像像素的基础上在像素点之间采用合适的插值算法插入新的元素。其中,插值算法可以是传统插值、基于边缘图像的插值或基于区域的图像插值等,本申请对此不做限定。Upsampling principle: Image enlargement usually adopts the interpolation method, that is, on the basis of the original image pixels, a suitable interpolation algorithm is used to insert new elements between the pixels. The interpolation algorithm may be traditional interpolation, edge image-based interpolation, or region-based image interpolation, etc., which is not limited in this application.
四、像素4. Pixels
像素可指构成图像传感器的成像区域的最小单元。其中,像素的尺寸是指像素的物理尺寸,即相邻像素中心之间的距离。A pixel may refer to the smallest unit that constitutes an imaging area of an image sensor. Among them, the size of the pixel refers to the physical size of the pixel, that is, the distance between the centers of adjacent pixels.
五、分辨率5. Resolution
分辨率可指图像传感器上可用于成像的最大像素(即感光单元)的数量。通常以横向像素的数量和纵向像素的数量的乘积来衡量,即分辨率=水平像素数×竖直像素数。Resolution can refer to the maximum number of pixels (ie, photosensitive cells) available for imaging on an image sensor. It is usually measured by the product of the number of horizontal pixels and the number of vertical pixels, that is, resolution=number of horizontal pixels×number of vertical pixels.
需要说明的是,在相同感光面积(或称为相同靶面)下,分辨率与像素的尺寸是此消彼长的。参考图1a和图1b,在相同感光面积下像素的尺寸与分辨率的关系。图1a的像素的尺寸为a,分辨率为4×4;图1b的像素的尺寸为a/2,分辨率为8×8。由图1a和图1b可以确定,即像素的尺寸越小,分辨率越高;像素的尺寸越大,分辨率越低。It should be noted that under the same photosensitive area (or called the same target surface), the resolution and the size of the pixel are trade-offs. Referring to Figure 1a and Figure 1b, the relationship between pixel size and resolution under the same photosensitive area. The size of the pixel in FIG. 1a is a and the resolution is 4×4; the size of the pixel in FIG. 1b is a/2 and the resolution is 8×8. It can be determined from Fig. 1a and Fig. 1b that the smaller the size of the pixel, the higher the resolution; the larger the size of the pixel, the lower the resolution.
6. Minimum illumination
Minimum illumination refers to how sensitive an image sensor is to ambient light, or in other words, the darkest light under which the image sensor can still image normally.
7. Aperture
The aperture is used to control how much light enters the optical lens, that is, the aperture determines the amount of light admitted by the optical lens. The size of the aperture is usually expressed by the F-number, written F/, which is also called the aperture number. An optical lens with a large aperture has a small F-number, and an optical lens with a small aperture has a large F-number.
With the shutter unchanged, the smaller the F-number, the larger the aperture, the more light is admitted, and the brighter the picture; the larger the F-number, the smaller the aperture, the less light is admitted, and the darker the picture.
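A minimal sketch of this relation at a fixed shutter speed, assuming the usual approximation that the admitted light scales as 1/F²; the numbers are illustrative only.

def relative_light(f_number, reference_f_number=1.0):
    # Light admitted relative to a reference aperture, all else being equal.
    return (reference_f_number / f_number) ** 2

print(relative_light(2.0))  # 0.25: a quarter of the light admitted at F/1.0
print(relative_light(4.0))  # 0.0625: the picture is darker still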
8. Polarizer
A polarizer, also called a polarizing plate, is a type of optical filter. A polarizer absorbs or reflects light of one polarization direction and transmits light of the orthogonal polarization direction, so the transmittance of light is directly related to its polarization state. Polarizers can generally be classified into absorptive polarizers and reflective polarizers (RP). An absorptive polarizer strongly absorbs one of the two orthogonal polarization components of incident linearly polarized light and only weakly absorbs the other component. A reflective polarizer transmits linearly polarized light of a certain direction and reflects light whose polarization direction is perpendicular to the transmitted direction. The absorptive polarizer may be, for example, a dichroic polarizer, and the reflective polarizer may be, for example, a polarizing beam splitter based on birefringence.
For natural light (see Figure 1c), after passing through a polarizer the outgoing light becomes linearly polarized and its energy becomes 50% of the energy of the incident light. For linearly polarized light (see Figure 1d or Figure 1e) incident on a polarizer, the outgoing light is still linearly polarized and its energy becomes I0 × cos²θ, where I0 is the energy of the incident linearly polarized light and θ is the angle between the polarization direction of the incident linearly polarized light and the transmission axis of the polarizer. For partially polarized light (see Figure 1f) incident on a polarizer, the outgoing light becomes linearly polarized, its energy is attenuated, and the attenuation varies periodically with the orientation of the polarizer.
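The behaviour just described can be summarised by a small sketch assuming an ideal linear polarizer: unpolarized (natural) light keeps 50% of its energy, and linearly polarized light follows I = I0 × cos²θ.

import math

def transmitted_energy(i0, theta_deg=None):
    # theta_deg=None models natural (unpolarized) light; otherwise theta_deg is the angle
    # between the incident polarization and the polarizer's transmission axis.
    if theta_deg is None:
        return 0.5 * i0
    return i0 * math.cos(math.radians(theta_deg)) ** 2

print(transmitted_energy(1.0))        # 0.5
print(transmitted_energy(1.0, 0.0))   # 1.0, aligned with the transmission axis
print(transmitted_energy(1.0, 90.0))  # ~0, perpendicular to the transmission axis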
9. Polarization direction
The polarization direction is also called the direction of polarization or the transmission (pass) direction. A polarizer has a characteristic direction, called the polarization direction: the polarizer only allows light parallel to this direction to pass through, while absorbing or reflecting light perpendicular to it.
10. Image fusion
Image fusion is an image processing technique in which image data about the same target collected through multiple source channels is processed and computed with specific algorithms so as to extract, to the greatest possible extent, the useful information in each channel and finally synthesize an image of high quality (for example, in terms of brightness, sharpness, and color). The fused image has a higher resolution than the original images.
Please refer to Figure 1g, which is a schematic diagram of the principle of image fusion provided in this application. Because image fusion can exploit the spatial-temporal correlation and the complementarity of information between two (or more) images, the new fused image gives a more comprehensive and clearer description of the scene, which is more beneficial to recognition and detection by a detection apparatus. It should be noted that image fusion usually requires that the images to be fused have been registered and have the same pixel bit width.
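As a hedged sketch of the prerequisite stated above (registered inputs with the same size and bit width), the Python fragment below blends two registered frames as a per-pixel weighted average; real fusion pipelines are considerably more elaborate, and the function name is hypothetical.

import numpy as np

def fuse(img_a, img_b, weight_a=0.5):
    # Both inputs must already be registered and share shape and bit width (dtype).
    assert img_a.shape == img_b.shape and img_a.dtype == img_b.dtype
    blended = weight_a * img_a.astype(np.float64) + (1.0 - weight_a) * img_b.astype(np.float64)
    return blended.astype(img_a.dtype)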
The foregoing has introduced some terms involved in this application; the technical features involved in this application are introduced below. It should be noted that these explanations are provided to help a person skilled in the art understand, and do not constitute a limitation on the protection scope claimed in this application.
Figure 2 is a schematic diagram of the anti-glare principle of a polarizer provided in this application. When the incident angle θ is the Brewster angle, the reflected light is linearly polarized and the transmitted light is approximately natural light. When the incident angle θ is not the Brewster angle, both the reflected light and the transmitted light are partially polarized. Here, the Brewster angle = arctan(n2/n1), where n1 is the refractive index of the medium in which the incident light travels and n2 is the refractive index of the medium in which the refracted light travels. According to the principle of Fresnel reflection, the glare reflected by a road surface is partially polarized or linearly polarized light, so the reflected glare can be weakened or eliminated by adjusting the polarization direction of a polarizer. It should be understood that when the glare is partially polarized light, the angle between the main polarization direction (also called the long polarization axis) of the glare and the road surface can be determined.
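The Brewster-angle relation quoted above can be evaluated directly; the air/water refractive indices below are common textbook values used only for illustration.

import math

def brewster_angle_deg(n1, n2):
    # theta_B = arctan(n2 / n1), with n1 the incident medium and n2 the refracting medium.
    return math.degrees(math.atan(n2 / n1))

print(brewster_angle_deg(1.0, 1.33))  # about 53.1 degrees for light in air meeting water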
Based on the above, possible application scenarios of the camera module in this application are given below. For example, the camera module may be installed on a vehicle (for example, an unmanned vehicle, a smart vehicle, an electric vehicle, or a digital vehicle) as a vehicle-mounted camera, as shown in Figure 3a. The vehicle-mounted camera can obtain measurement information such as the distance to surrounding objects in real time or periodically, so as to provide the information needed for operations such as lane correction, headway keeping, and reversing. A vehicle-mounted camera can implement: (a) target recognition and classification, for example recognition of various lane lines, traffic lights, and traffic signs; (b) freespace detection, for example dividing the safe boundary of the drivable area of the vehicle, mainly delimiting vehicles, ordinary road edges, curb edges, boundaries with no visible obstacles, unknown boundaries, and so on; (c) detection of laterally moving targets, for example detection and tracking of pedestrians and vehicles crossing an intersection; and (d) localization and map creation, for example based on visual simultaneous localization and mapping (SLAM) technology. Therefore, vehicle-mounted cameras have been widely used in fields such as unmanned driving, autonomous driving, assisted driving, intelligent driving, connected vehicles, security monitoring, and surveying and mapping. It should be understood that the camera module may be combined with an advanced driving assistant system (ADAS).
It should be noted that the above application scenarios are only examples, and the camera module provided in this application can also be used in various other scenarios, and is not limited to the scenarios illustrated above. For example, the camera module may also be applied to a terminal device or to a component provided in a terminal device; the terminal device may be, for example, a smartphone, a smart home device, intelligent manufacturing equipment, a robot, an unmanned aerial vehicle, or intelligent transport equipment (such as an automated guided vehicle (AGV) or an unmanned transport vehicle). As another example, the camera module may also be installed on an unmanned aerial vehicle as an airborne camera. As yet another example, the camera module may also be installed on roadside traffic equipment (such as a road side unit (RSU)) as a roadside traffic camera, as shown in Figure 3b, so as to implement intelligent vehicle-road cooperation.
Among the functional parameters of a camera module, dynamic range is relatively important. At present the dynamic range of a camera module can only reach 80 to 120 decibels (dB), whereas the dynamic range of targets in nature can reach 180 dB; therefore, the dynamic range of camera modules still needs to be improved. At present, the dynamic range of a camera module is mainly improved by optimizing the pixels of the image sensor included in the camera module. However, because pixel optimization is limited, the resulting improvement of the dynamic range of the camera module is also limited. Therefore, how to obtain high-quality images with a wide dynamic range is a technical problem that camera modules urgently need to solve.
In view of this, this application provides a camera module that can obtain a wider dynamic range.
Based on the above, the camera module proposed in this application is described in detail below with reference to Figures 4 to 11.
Embodiment 1
Figure 4 is a schematic structural diagram of a camera module provided in this application. The camera module may include a first optical lens assembly 401, a light splitting assembly 402, a first image sensing assembly 403, and a second image sensing assembly 404. The first image sensing assembly 403 and the second image sensing assembly 404 have the same photosensitive area, and the resolution of the first image sensing assembly 403 is greater than the resolution of the second image sensing assembly 404. The first optical lens assembly 401 is configured to receive light from a target object; the light splitting assembly 402 is configured to split the light propagating through the first optical lens assembly 401 to obtain first light and second light, to propagate the first light to the first image sensing assembly 403, and to propagate the second light to the second image sensing assembly 404.
Based on this camera module, because sensitivity and resolution trade off against each other when the photosensitive area of an image sensing assembly is fixed, the sensitivity of the first image sensing assembly is lower than the sensitivity of the second image sensing assembly. In other words, the first image sensing assembly has a higher resolution and a lower sensitivity to light, is not easily over-exposed by bright light, can resolve high-brightness light, and can be used for imaging bright targets; the second image sensing assembly has a lower resolution and a higher sensitivity to light, can resolve low-brightness light, and can be used for imaging dim targets. In this way, the camera module can image both dim targets and bright targets, which helps to improve the dynamic range of the camera module.
It should be understood that in this camera module the second image sensing assembly effectively sacrifices part of the image resolution in exchange for an increase in sensitivity, so as to improve the dynamic range of the camera module.
In a possible implementation, the information carried by the light propagating through the first optical lens assembly is the same as the information carried by the light entering the first optical lens assembly (that is, the light from the target object).
It should be noted that the target object is not limited to a single object. For example, when a person is photographed, the target object includes the person and the scenery around the person, that is, the scenery around the person is also part of the target object. It can also be understood that any object within the field of view of the first optical lens assembly can be called the target object.
The functional components and structures shown in Figure 4 are introduced below to give exemplary specific implementations. For convenience of description, the first optical lens assembly, the light splitting assembly, the first image sensing assembly, and the second image sensing assembly are not labeled with reference numerals in the following.
1. The first optical lens assembly
In a possible implementation, the first optical lens assembly can receive light from the target object and, by changing the propagation direction of the light from the target object, cause as much of that light as possible to enter the camera module. Further, optionally, the first optical lens assembly may also propagate glare (for example, glare reflected from the road surface) into the camera module.
In a possible implementation, the first optical lens assembly may consist of at least one optical lens. As an example, Figure 5a shows a schematic structural diagram of a first optical lens assembly that includes seven optical lenses. Light from the target object can propagate through this first optical lens assembly into the camera module as much as possible; further, external glare can also propagate through this first optical lens assembly into the camera module.
As another example, Figure 5b shows a schematic structural diagram of another first optical lens assembly, which may include six optical lenses.
It should be understood that the structures of the first optical lens assembly shown in Figure 5a or Figure 5b are only examples, and the first optical lens assembly in this application may have more or fewer optical lenses than in Figure 5a, or more or fewer optical lenses than in Figure 5b. An optical lens may be any of a convex lens (such as a biconvex, plano-convex, or convex-concave lens) or a concave lens (such as a biconcave, plano-concave, or meniscus lens), or a combination of convex and concave lenses, which is not limited in this application.
In a possible implementation, in order to suppress temperature drift, at least one optical lens in the first optical lens assembly is made of glass.
Further, optionally, in order to minimize the height of the camera module, the optical lenses in the first optical lens assembly may be cut in the height direction of the camera module (see Figure 5a above).
2. Light splitting assembly
In a possible implementation, the light splitting assembly may split the light propagating from the first optical lens assembly (for example, into two) to obtain the first light and the second light. For example, the light splitting assembly may split the intensity (also called splitting the energy or the amplitude) of the light propagating from the first optical lens assembly to obtain the first light and the second light. It should be understood that the first light and the second light carry the same information: the information carried by the first light is the same as the information carried by the light propagating from the first optical lens assembly, and the information carried by the second light is also the same as that information. The sum of the intensity of the first light and the intensity of the second light is equal to, or approximately equal to, the intensity of the light propagating from the first optical lens assembly.
Further, optionally, the ratio of the intensity of the first light to that of the second light may be less than 1. For example, the intensity ratio of the first light to the second light may be 2:8; as another example, 1:9; as yet another example, 4:6. This can also be understood as the intensity of the first light received by the first image sensing assembly being lower than the intensity of the second light received by the second image sensing assembly. In this way, the intensity of the first light received by the high-resolution, low-sensitivity first image sensing assembly is lower than the intensity of the second light received by the low-resolution, high-sensitivity second image sensing assembly, which helps to further improve the dynamic range of the camera module. It should be understood that the ratio of the intensities of the first light to the second light may also be equal to 1, or greater than 1.
It should be noted that the ratio of the intensities of the first light to the second light can be designed according to actual requirements, which is not limited in this application.
In a possible implementation, the light splitting assembly may be, for example, a beam splitter (BS) prism or a beam splitting plate. A beam splitter prism is formed by plating one or more thin films (that is, a beam splitting film) on a surface of a prism, and a beam splitting plate is formed by plating one or more thin films (that is, a beam splitting film) on one surface of a glass plate. Both the beam splitter prism and the beam splitting plate exploit the difference between the transmittance and the reflectance of the film for incident light to split the light propagating from the first optical lens assembly.
Figure 6 is a schematic diagram of the light splitting principle of a light splitting assembly provided in this application. The beam splitter prism can divide the light propagating from the first optical lens assembly into two parts to obtain the first light and the second light. This can also be understood as follows: after the light propagating from the first optical lens assembly passes through the light splitting assembly, one part is transmitted (the first light) to the first image sensing assembly and the other part is reflected (the second light) to the second image sensing assembly. For example, the ratio of the intensities of the first light to the second light may be determined by the reflectance and transmittance of the plated beam splitting film.
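A minimal sketch of the intensity split performed by the beam splitting film, assuming a lossless film so that the transmitted and reflected parts sum to the incident intensity; the 2:8 value mirrors the example ratios used later in the text.

def split_intensity(incident, film_transmittance):
    # Returns (transmitted first light, reflected second light) for a lossless film.
    transmitted = incident * film_transmittance
    reflected = incident * (1.0 - film_transmittance)
    return transmitted, reflected

print(split_intensity(1.0, 0.2))  # (0.2, 0.8), i.e. the 2:8 first:second ratio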
3. The first image sensing assembly and the second image sensing assembly
In a possible implementation, the first image sensing assembly can perform photoelectric conversion on the received first light to obtain information of a first image, and the second image sensing assembly can perform photoelectric conversion on the second light to obtain information of a second image.
Here, the first image sensing assembly and the second image sensing assembly use a non-identical architecture. Specifically, the first image sensing assembly and the second image sensing assembly have the same photosensitive area, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly. Further, optionally, the first image sensing assembly includes first pixels and the second image sensing assembly includes second pixels, and the size of a first pixel is smaller than the size of a second pixel. It should be understood that the larger the pixel size, the higher the sensitivity of the image sensing assembly, and the smaller the pixel size, the lower the sensitivity; therefore, the sensitivity to light of the first image sensing assembly is lower than that of the second image sensing assembly.
Figure 7 is a schematic diagram of the relationship between a first image sensing assembly and a second image sensing assembly provided in this application. Part (a) of Figure 7 shows the first image sensing assembly and part (b) of Figure 7 shows the second image sensing assembly. The minimum repeatable unit of both the first image sensing assembly and the second image sensing assembly is RGGB, where R denotes a pixel for receiving red light, G denotes a pixel for receiving green light, and B denotes a pixel for receiving blue light. The first image sensing assembly and the second image sensing assembly have the same photosensitive area, and the size of a second pixel included in the second image sensing assembly is twice the size of a first pixel included in the first image sensing assembly; therefore, the resolution of the first image sensing assembly is four times that of the second image sensing assembly, and the sensitivity of the first image sensing assembly is 1/4 of the sensitivity of the second image sensing assembly. In other words, the first image sensing assembly has the characteristics of high resolution, small pixels, and low sensitivity, and can be used for imaging bright targets; the second image sensing assembly has the characteristics of low resolution, large pixels, and high sensitivity, and can be used for imaging dim targets.
It should be noted that the resolutions and the minimum repeatable units of the first image sensing assembly and the second image sensing assembly shown in Figure 7 are merely examples and do not limit this application. For example, the minimum repeatable unit of the first image sensing assembly and the second image sensing assembly may also be RYYB.
With reference to Figure 7 above, take as an example a resolution of 8M for the first image sensing assembly and 2M for the second image sensing assembly, and a ratio of 2:8 between the intensities of the first light and the second light. Table 1 shows, by way of example, the relationship between the improved dynamic range of a camera module based on Embodiment 1 and the dynamic range of a camera module based on the prior art.
Table 1 Relationship between the dynamic range improved by the camera module based on Embodiment 1 and the dynamic range of the camera module based on the prior art
It should be noted that the field of view of the first optical lens assembly may include bright targets and may also include dim targets; bright targets and dim targets are collectively called target objects. It can be seen from Table 1 that, with a camera module in the prior art, the energy of the incident light from a bright target received by the image sensor is Eh, the energy of the incident light from a dim target is El, and the dynamic range is 10·log₁₀(Eh/El). With the camera module of Embodiment 1, because the ratio of the intensities of the first light to the second light after splitting by the light splitting assembly is 2:8, the energy of the incident light from the bright target received by the first image sensing assembly is 0.2 × Eh. In this way, the intensity of the first light entering the first image sensing assembly is reduced to 0.2 of its original value, so the over-exposure resistance of the first image sensing assembly for bright targets is improved by a factor of 5. The energy of the incident light from the dim target received by the second image sensing assembly is 0.8 × El; moreover, the area of a second pixel included in the second image sensing assembly is four times the area of a first pixel included in the first image sensing assembly, so the sensitivity of the second image sensing assembly is four times that of the first image sensing assembly, and the low-light capability of the second image sensing assembly for dim targets is improved by 4 × 0.8 = 3.2 times. Therefore, the dynamic range of the camera module can be increased by a factor of 16, that is, by 10·log₁₀(16) ≈ 12 dB. This further shows that the dynamic range of the camera module based on Embodiment 1 is effectively improved. It should be understood that the energy of the incident light from the bright target received by the second image sensing assembly is 0.8 × Eh, which may be over-exposed, and the energy of the incident light from the dim target received by the first image sensing assembly is 0.2 × El, which may be too dark; a high dynamic range can be achieved through the subsequent fusion of the image information by the first processing component, which is described later and is not repeated here.
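The bookkeeping behind this example can be reproduced by a short sketch, under the stated assumptions (a 2:8 split and a second pixel with four times the area, hence roughly four times the sensitivity, of a first pixel); the numbers are those of the text, not measured values.

import math

share_first, share_second = 0.2, 0.8               # 2:8 split by the light splitting assembly
sensitivity_gain_second = 4.0                      # from the 4x larger pixel area

highlight_gain = 1.0 / share_first                 # 5x less light on the high-resolution sensor
lowlight_gain = sensitivity_gain_second * share_second  # 4 x 0.8 = 3.2

total_gain = highlight_gain * lowlight_gain        # 16
print(total_gain, 10 * math.log10(total_gain))     # 16, about 12 dB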
In a possible implementation, the first image sensing assembly may be a complementary metal-oxide semiconductor (CMOS) phototransistor, a charge-coupled device (CCD), a photon detector (PD), or a high-speed photodiode. A CMOS phototransistor is a device or chip based on a complementary metal-oxide semiconductor process that converts an optical signal into an electrical signal. The second image sensing assembly may likewise be a CMOS phototransistor, a CCD, a PD, or a high-speed photodiode.
It should be noted that the first image sensing assembly may be of the same type as the second image sensing assembly; for example, both may be CMOS phototransistors. Alternatively, the first image sensing assembly may be of a different type from the second image sensing assembly; for example, the first image sensing assembly may be a CMOS phototransistor and the second image sensing assembly a CCD.
In a possible implementation, the first image sensing assembly may be a first image sensor and the second image sensing assembly may be a second image sensor. For example, the resolution of the first image sensor may be in the range of [8 megapixels, 48 megapixels] and the resolution of the second image sensor may be in the range of [8 megapixels, 48 megapixels], provided that the resolution of the first image sensor is greater than the resolution of the second image sensor. For example, the resolution of the first image sensor may be 12 megapixels, 20 megapixels, or 48 megapixels, and the resolution of the second image sensor may be 8 megapixels. Of course, the resolution of the first image sensor may also be greater than 48 megapixels, for example, 52 megapixels, 60 megapixels, or 72 megapixels; the resolution of the second image sensor may also be greater than 8 megapixels. It should be understood that any combination in which the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly is acceptable.
The camera module in Embodiment 1 may further include a first polarizer. Further, optionally, the camera module may further include a first processing component. The first polarizer and the first processing component are introduced below.
4. First polarizer
In a possible implementation, the first polarizer may be located between the light splitting assembly and the first image sensing assembly (see Figure 9 below). The first polarizer is configured to allow the part of the first light whose polarization state is parallel to the polarization direction of the first polarizer to pass through, and the first light passing through the first polarizer converges on the first image sensing assembly. This further reduces the intensity of the first light entering the first image sensing assembly, thereby further reducing over-exposure of the first image sensing assembly by bright light and further improving the dynamic range of the camera module. It should be noted that the first light passing through the first polarizer is a part of the first light incident on the first polarizer (that is, the part of the first light that is parallel to the polarization direction of the first polarizer).
Further, optionally, the polarization direction of the first polarizer is perpendicular to the main polarization direction of the glare propagating through the first optical lens assembly. By introducing the first polarizer into the optical path of the high-resolution first image sensing assembly and designing its polarization direction to be perpendicular to the main polarization direction of the received glare, the glare entering the high-resolution first image sensing assembly can be eliminated or weakened (the specific principle follows from the polarization state of the glare and the description of Figure 2 above), so that a glare-suppressed image with a high dynamic range can be obtained.
With reference to Figure 7 above, again take as an example a resolution of 8M for the first image sensing assembly and 2M for the second image sensing assembly, and a ratio of 2:8 between the intensities of the first light and the second light. When the camera module further includes the first polarizer, the relationship between the dynamic range of the camera module based on Embodiment 1 and the dynamic range of the camera module based on the prior art is shown in Table 2 below.
Table 2 Dynamic range improved by the camera module based on Embodiment 1 and dynamic range of the camera module based on the prior art
When the camera module further includes the first polarizer, in combination with Table 2 and similarly to the analysis of Table 1 above, the over-exposure resistance of the first image sensing assembly for bright targets is improved by a factor of 10, the low-light capability of the second image sensing assembly for dim targets is improved by a factor of 3.2, and the dynamic range of the camera module is improved by 10·log₁₀(32) ≈ 15 dB.
In a possible implementation, the improvement of the dynamic range of the camera module is related to the ratio of the intensities of the first light to the second light. When the camera module further includes the first polarizer, the relationship between the intensity ratio η of the first light to the second light and the dynamic range improvement is shown in Table 3.
Table 3 Relationship between the intensity ratio η of the first light to the second light and the dynamic range improvement
η | 1:9 | 2:8 | 3:7 | 4:6 | 5:5
Dynamic range improvement (times) | 20×3.6=72 | 10×3.2=32 | 6.7×2.8=18.7 | 5×2.4=12 | 4×2=8
Dynamic range improvement (dB) | 18.5 | 15 | 12.7 | 10.7 | 9
It should be understood that the relationship between the dynamic range improvement in dB and the dynamic range improvement as a factor satisfies: dynamic range improvement (dB) = 10·log₁₀(dynamic range improvement (times)).
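Under the same assumptions as above (the polarizer passes roughly half of the first light, and the second pixel has four times the sensitivity of the first), the rows of Table 3 follow, up to rounding, from this relation, as the short sketch below shows.

import math

for first, second in [(1, 9), (2, 8), (3, 7), (4, 6), (5, 5)]:
    eta = first / (first + second)         # share of light sent to the first sensor
    highlight_gain = 1.0 / (eta * 0.5)     # intensity split plus the 50% polarizer loss
    lowlight_gain = 4.0 * (1.0 - eta)      # 4x sensitivity times the second sensor's share
    factor = highlight_gain * lowlight_gain
    print(f"{first}:{second}  {factor:.1f}x  {10 * math.log10(factor):.1f} dB")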
When the camera module includes the first polarizer, the first polarizer can not only eliminate or weaken glare, but also soften the first light received by the first image sensing assembly.
5. First processing component
In a possible implementation, the first processing component may receive the information of the first image from the first image sensing assembly and the information of the second image from the second image sensing assembly, and generate an image of the target object according to the information of the first image and the information of the second image. Specifically, the first processing component may up-sample the information of the second image (see the related description above) to obtain information of a third image, where the resolution of the third image corresponding to the information of the third image is the same as the resolution of the first image corresponding to the information of the first image, and may then fuse the information of the first image with the information of the third image to obtain the image of the target object, as shown in Figure 8. By fusing the information of the first image and the information of the second image, the obtained image of the target object has a higher quality.
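A minimal sketch of this step, assuming the second image simply needs to be enlarged to the first image's resolution before blending; nearest-neighbour enlargement is used here for brevity, whereas the application allows any interpolation method, and the function name is hypothetical.

import numpy as np

def form_target_image(first_img, second_img, weight_first=0.5):
    # Up-sample the low-resolution second image to the first image's resolution
    # (the "third image"), then fuse the two pixel by pixel.
    factor = first_img.shape[0] // second_img.shape[0]   # e.g. 2 for the 8M / 2M pair
    third_img = np.repeat(np.repeat(second_img, factor, axis=0), factor, axis=1)
    third_img = third_img[:first_img.shape[0], :first_img.shape[1]]
    fused = weight_first * first_img.astype(np.float64) \
        + (1.0 - weight_first) * third_img.astype(np.float64)
    return fused.astype(first_img.dtype)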
Further, optionally, the first processing component may also perform processing such as denoising, enhancement, and segmentation or background blurring on the fused image of the target, so as to enrich the user experience.
In a possible implementation, the first processing component may be, for example, an application processor (AP), a graphics processing unit (GPU), an image signal processor (ISP), or a digital signal processor (DSP).
Alternatively, the first processing component may be a central processing unit (CPU), or another general-purpose processor, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. The general-purpose processor may be a microprocessor or any conventional processor.
Alternatively, the first processing component may be a combination of any one of a CPU, an ASIC, an FPGA, another programmable logic device, or a transistor logic device with any one of an application processor, a graphics processing unit, an image signal processor, or a digital signal processor.
It should be noted that the camera module in Embodiment 1 above may further include an infrared radiation (IR) filter, and the IR filter may be located between the light splitting assembly and the first image sensing assembly and/or between the light splitting assembly and the second image sensing assembly. The IR filter can be used to block or absorb infrared light to prevent damage to the image sensing assemblies; moreover, the IR filter can be configured to have no effect on the focal length of the first optical lens assembly. For example, the material of the IR filter may be glass or a glass-like resin, such as blue glass.
Based on the above, a specific implementation of the camera module in Embodiment 1 is given below in combination with a specific hardware structure, so as to further explain the structure of the camera module and the process of forming the image of the target object.
Figure 9 is a schematic structural diagram of another camera module provided in this application. The camera module may include a first optical lens assembly, a light splitting assembly, a first image sensing assembly, a second image sensing assembly, a first polarizer located between the light splitting assembly and the first image sensing assembly, and a first processing component. The first optical lens assembly takes Figure 5a above as an example, the light splitting assembly takes a beam splitter prism as an example, the first image sensing assembly takes part (a) of Figure 7 above as an example, and the second image sensing assembly takes part (b) of Figure 7 above as an example.
For possible implementations of the first optical lens assembly, the light splitting assembly, the first image sensing assembly, the second image sensing assembly, the first polarizer, and the first processing component, reference may be made to the foregoing related descriptions, which are not repeated here.
Based on the camera module shown in Figure 9, light from the target object passes through the first optical lens assembly and propagates to the light splitting assembly, where it is split into the first light and the second light; the first light propagates to the first image sensing assembly and the second light propagates to the second image sensing assembly. The first image sensing assembly converts the first light into a first electrical signal (that is, the information of the first image), and the second image sensing assembly converts the second light into a second electrical signal (that is, the information of the second image). The first electrical signal may then pass through an analog-to-digital converter (A/D) to become a first digital image signal, and the second electrical signal may pass through an A/D to become a second digital image signal. Both digital image signals are transmitted to the ISP for processing (for example, fusion) to obtain an image of the target object in a specific format. Further, optionally, the ISP may transmit the image of the target object to a display screen for display.
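The signal path just described can be mimicked by a toy numerical sketch; it ignores the resolution difference between the two assemblies and uses saturation to stand in for over-exposure, so every value and gain below is an illustrative assumption rather than part of this application.

import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(8, 8))             # stand-in for light from the target object

first_light, second_light = 0.2 * scene, 0.8 * scene    # light splitting assembly, 2:8 split
first_light = 0.5 * first_light                         # ideal first polarizer halves the energy

def photoelectric_conversion(light, gain, full_well=1.0):
    # Saturation at the full-well level stands in for over-exposure of a pixel.
    return np.clip(light * gain, 0.0, full_well)

e1 = photoelectric_conversion(first_light, gain=1.0)    # high-resolution, low-sensitivity path
e2 = photoelectric_conversion(second_light, gain=4.0)   # low-resolution, high-sensitivity path

d1, d2 = np.round(e1 * 255), np.round(e2 * 255)         # A/D conversion to digital image signals
target_image = 0.5 * d1 + 0.5 * d2                      # ISP-style fusion of the two signals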
Embodiment 2
Figure 10 is a schematic structural diagram of yet another camera module provided in this application. The camera module may include a second optical lens assembly 1001, a third optical lens assembly 1002, a first image sensing assembly 1003, and a second image sensing assembly 1004. The first image sensing assembly 1003 and the second image sensing assembly 1004 have the same photosensitive area, and the resolution of the first image sensing assembly 1003 is greater than the resolution of the second image sensing assembly 1004. The second optical lens assembly 1001 is configured to receive light from a target object and propagate third light to the first image sensing assembly 1003; the third optical lens assembly 1002 is configured to receive light from the target object and propagate fourth light to the second image sensing assembly 1004.
Based on this camera module, the second optical lens assembly and the third optical lens assembly allow the camera module to implement dual-channel imaging: the second optical lens assembly can propagate the third light from the target object to the first image sensing assembly, and the third optical lens assembly can propagate the fourth light from the target object to the second image sensing assembly. Because sensitivity and resolution trade off against each other when the photosensitive area of an image sensing assembly is fixed, the sensitivity of the first image sensing assembly is lower than the sensitivity of the second image sensing assembly. In other words, the first image sensing assembly has a higher resolution and a lower sensitivity to light, is not easily over-exposed by bright light, can resolve high-brightness light, and can be used for imaging bright targets; the second image sensing assembly has a lower resolution and a higher sensitivity to light, can resolve low-brightness light, and can be used for imaging dim targets. In this way, the camera module can image both dim targets and bright targets, which helps to improve the dynamic range of the camera module.
Here, the information carried by the third light is the same as the information carried by the light from the target object, and the information carried by the fourth light is also the same as the information carried by the light from the target object. For example, the third light is part or all of the light from the target object, and the fourth light is also part or all of the light from the target object.
It should be noted that in Embodiment 2 the second optical lens assembly and the third optical lens assembly are arranged close to each other; therefore, the field of view of the second optical lens assembly is the same as the field of view of the third optical lens assembly, or the parallax between them is negligible. In other words, the light from the target object received by the second optical lens assembly is the same as the light from the target object received by the third optical lens assembly, or the difference between them is negligible.
The functional components and structures shown in Figure 10 are introduced below to give exemplary specific implementations. For convenience of description, the second optical lens assembly, the third optical lens assembly, the first image sensing assembly, and the second image sensing assembly are not labeled with reference numerals in the following.
6. The second optical lens assembly and the third optical lens assembly
In a possible implementation, the second optical lens assembly can receive light from the target object and, by changing the propagation direction of the light from the target object, propagate the third light to the first image sensing assembly; the third optical lens assembly can receive light from the target object and, by changing the propagation direction of the light from the target object, propagate the fourth light to the second image sensing assembly.
Further, optionally, both the second optical lens assembly and the third optical lens assembly may also propagate glare (for example, glare reflected from the road surface) into the camera module.
In a possible implementation, the aperture number (that is, the F-number) of the second optical lens assembly is greater than the aperture number of the third optical lens assembly. This can also be understood as the intensity of the third light obtained from the light of the target object propagating through the second optical lens assembly being lower than the intensity of the fourth light propagating through the third optical lens assembly. In other words, the intensity of the third light is lower than the intensity of the fourth light. It should be understood that a small aperture number admits a large amount of light, which is beneficial for imaging dim light, whereas a large aperture number admits a small amount of light, which is beneficial for imaging bright light.
In a possible implementation, the ratio of the intensities of the third light to the fourth light can be changed through the aperture number of the second optical lens assembly and the aperture number of the third optical lens assembly. For example, the intensity ratio of the third light to the fourth light may be 2:8; as another example, 1:9; as yet another example, 4:6. This can also be understood as the intensity of the third light received by the first image sensing assembly being lower than the intensity of the fourth light received by the second image sensing assembly.
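One way the two aperture numbers could set such a ratio is sketched below, assuming equal focal lengths and the usual 1/F² dependence of admitted light; the F/2.8 and F/1.4 values are assumptions for illustration, not values given in this application.

def third_to_fourth_ratio(f_number_second_assembly, f_number_third_assembly):
    # Intensity ratio of the third light (to the first sensor) over the fourth light
    # (to the second sensor), with admitted light taken as proportional to 1/F^2.
    return (f_number_third_assembly / f_number_second_assembly) ** 2

print(third_to_fourth_ratio(2.8, 1.4))  # 0.25, roughly the 2:8 split used in the examples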
In a possible implementation, the second optical lens assembly and the third optical lens assembly may both be fixed-aperture optical lens assemblies, where the second optical lens assembly is a small-aperture optical lens assembly and the third optical lens assembly is a large-aperture optical lens assembly.
In another possible implementation, the second optical lens assembly and the third optical lens assembly are both adjustable-aperture optical lens assemblies, and the apertures can be adjusted so that the aperture number of the second optical lens assembly is greater than the aperture number of the third optical lens assembly.
Through the second optical lens assembly and the third optical lens assembly, the camera module can implement dual-channel imaging, and the amount of light admitted by each channel can be controlled through the aperture of the corresponding optical lens assembly. The second optical lens assembly, with a large aperture number, works with the high-resolution first image sensing assembly, and the third optical lens assembly, with a small aperture number, works with the low-resolution second image sensing assembly, which helps to improve the dynamic range of the camera module.
Further, optionally, the second optical lens assembly and the third optical lens assembly have the same focal length. In this way, the third light passing through the second optical lens assembly and the fourth light passing through the third optical lens assembly converge on the same plane, so that the first image sensing assembly and the second image sensing assembly can be arranged on the same substrate (see Figure 11 below), which helps to simplify the assembly of the camera module. Based on this, the camera module of Embodiment 2 can be applied to fixed-focus imaging scenarios.
In a possible implementation, the image circle of the second optical lens assembly is the same as that of the third optical lens assembly, so that the effective areas used for imaging by the first image sensing assembly and the second image sensing assembly are consistent.
It should be noted that, for the structures of the second optical lens assembly and the third optical lens assembly, reference may be made to the description of the first optical lens assembly above, which is not repeated here. In addition, the structure of the second optical lens assembly may be the same as or different from that of the third optical lens assembly, which is not limited in this application.
Further, optionally, in order to minimize the height of the camera module, the optical lenses in the second optical lens assembly and/or the third optical lens assembly may be cut in the height direction of the camera module.
七、第一图像传感组件和第二图像传感组件7. The first image sensing assembly and the second image sensing assembly
在一种可能的实现方式中,第一图像传感组件用于对接收到的第三光线进行光电转换,得到第四图像的信息;第二图像传感组件用于对接收到的第四光线进行光电转换,得到第五图像的信息;其中,第四图像的信息和第五图像的信息用于形成目标物体的图像。In a possible implementation manner, the first image sensing component is used to perform photoelectric conversion on the received third light to obtain information of the fourth image; the second image sensing component is used for the received fourth light Photoelectric conversion is performed to obtain information of the fifth image; wherein, the information of the fourth image and the information of the fifth image are used to form an image of the target object.
For the introduction of the first image sensing assembly and the second image sensing assembly, refer to the foregoing related descriptions, which are not repeated here. Specifically, the first light is replaced by the third light, the second light by the fourth light, the information of the first image by the information of the fourth image, the information of the second image by the information of the fifth image, the information of the third image by the information of the sixth image, and so on.
The camera module in Embodiment 2 may further include a second polarizer. Further, optionally, the camera module may also include a second processing component. The second polarizer and the second processing component are introduced below.
8. The second polarizer
The second polarizer is located between the second optical lens assembly and the first image sensing assembly (see FIG. 11 below). The second polarizer allows the portion of the third light that is parallel to its polarization direction to pass, and the third light that passes through the second polarizer converges on the first image sensing assembly; the remaining portion is filtered out. In this way, the intensity of the third light entering the first image sensing assembly can be further reduced, which further reduces overexposure of the first image sensing assembly by bright light and thus further improves the dynamic range of the camera module. It should be noted that the third light passing through the second polarizer is only a part of the third light incident on the second polarizer (that is, the part of the third light parallel to the polarization direction of the second polarizer).
Further, optionally, the polarization direction of the second polarizer is perpendicular to the main polarization direction of the glare propagated through the second optical lens assembly. By introducing the second polarizer into the optical path of the high-resolution first image sensing assembly and designing its polarization direction to be perpendicular to the main polarization direction of the received glare, the glare entering the high-resolution first image sensing assembly can be eliminated or weakened (the specific principle follows from the polarization state of glare and the description of FIG. 2 above), so that a glare-free, high-dynamic-range image can be obtained.
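As a hedged illustration of the underlying optics (the decomposition below is an assumption for the example, not a statement from the application), glare can be modeled as a partially polarized beam with an unpolarized component of intensity $I_u$ and a linearly polarized component of intensity $I_p$; an ideal linear polarizer whose axis makes an angle $\theta$ with the main polarization direction of the glare then transmits

$$I_{out} = \frac{I_u}{2} + I_p \cos^2\theta,$$

so with $\theta = 90^\circ$ the polarized glare component is suppressed entirely and only about half of the unpolarized background light remains.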
With reference to FIG. 7 above, take as an example a first image sensing assembly with a resolution of 8M, a second image sensing assembly with a resolution of 2M, and a ratio of the intensities of the third light to the fourth light of 2:8. Table 4 exemplarily shows the relationship between the dynamic range of a camera module based on Embodiment 2 and that of a camera module based on the prior art.
Table 4: Dynamic-range relationship between the camera module based on Embodiment 2 and a camera module based on the prior art
When the camera module further includes the second polarizer, combining Table 4 above with an analysis similar to that of Table 1, the anti-overexposure capability of the first image sensing assembly for bright targets is improved by a factor of 2, the low-light capability of the second image sensing assembly for dim targets is improved by a factor of 4, and the dynamic range of the camera module is 10log8.
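Read as a back-of-the-envelope figure (an editorial interpretation of the value quoted above), the combined gain corresponds to a dynamic-range extension of

$$\Delta DR = 10\log_{10}(2 \times 4) = 10\log_{10}8 \approx 9\ \text{dB}$$

relative to a single-sensor module, that is, the 2x headroom gain on the bright end multiplies with the 4x sensitivity gain on the dark end.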
Further, optionally, for the relationship between the ratio of the intensities of the third light and the fourth light and the resulting dynamic-range improvement, refer to Table 3 above, which is not repeated here.
9. The second processing component
In a possible implementation, the second processing component may receive the information of the fourth image from the first image sensing assembly and the information of the fifth image from the second image sensing assembly, and obtain the image of the target object according to the information of the fourth image and the information of the fifth image. Specifically, the second processing component up-samples the information of the fifth image to obtain information of a sixth image, where the resolution of the sixth image corresponding to the information of the sixth image is the same as the resolution of the fourth image corresponding to the information of the fourth image, and then fuses the information of the fourth image and the information of the sixth image to obtain the image of the target object.
Here, for possible examples of the second processing component, refer to the introduction of the first processing component, which is not repeated here.
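A minimal sketch of the up-sampling and fusion step described above is given below in Python with NumPy and OpenCV; the function name, the global blending weight, and the 8-bit output format are editorial assumptions rather than details taken from this application, and a practical ISP would typically use a local, scene-dependent weight map.

```python
import cv2
import numpy as np


def fuse_dual_channel(img_hi_res: np.ndarray, img_lo_res: np.ndarray,
                      weight_lo: float = 0.5) -> np.ndarray:
    """Fuse a high-resolution frame (the 'fourth image') with a
    low-resolution, high-sensitivity frame (the 'fifth image').

    weight_lo is an assumed global blending weight; a real ISP would
    normally use a local, scene-dependent weight map instead.
    """
    # Up-sample the low-resolution frame so that its resolution matches
    # the high-resolution frame; the result plays the role of the
    # 'sixth image' described in the text.
    h, w = img_hi_res.shape[:2]
    img_up = cv2.resize(img_lo_res, (w, h), interpolation=cv2.INTER_LINEAR)

    # Simple weighted fusion in floating point to avoid clipping,
    # then conversion back to an 8-bit image of the target object.
    fused = ((1.0 - weight_lo) * img_hi_res.astype(np.float32)
             + weight_lo * img_up.astype(np.float32))
    return np.clip(fused, 0, 255).astype(np.uint8)
```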
It should be noted that the camera module of Embodiment 2 may further include an infrared (infrared radiation, IR) filter, which may be located between the second optical lens assembly and the first image sensing assembly, and/or between the third optical lens assembly and the second image sensing assembly. For an introduction to the IR filter, refer to the foregoing related descriptions, which are not repeated here.
Based on the above, a specific implementation of the camera module in Embodiment 2 is given below in combination with a specific hardware structure, in order to further explain the structure of the camera module and the process of forming the image of the target object.
As shown in FIG. 11, which is a schematic structural diagram of another camera module provided by this application, the camera module may include a second optical lens assembly, a third optical lens assembly, a first image sensing assembly, a second image sensing assembly, a second polarizer located between the second optical lens assembly and the first image sensing assembly, and a second processing component. The first image sensing assembly takes (a) in FIG. 7 above as an example, and the second image sensing assembly takes (b) in FIG. 7 above as an example. In this example, the second optical lens assembly and the third optical lens assembly are each represented by a single lens; for their specific structures, refer to the foregoing description of the first optical lens assembly.
For possible implementations of the second optical lens assembly, the third optical lens assembly, the first image sensing assembly, the second image sensing assembly, the second polarizer, and the second processing component, refer to the foregoing related descriptions respectively, which are not repeated here.
Based on the camera module shown in FIG. 11, light from the target object passes through the second optical lens assembly to produce the third light, and the third light is propagated to the second polarizer and, after passing through the second polarizer, to the first image sensing assembly; light from the target object passes through the third optical lens assembly to produce the fourth light, and the fourth light is propagated to the second image sensing assembly. The first image sensing assembly converts the third light into a fourth electrical signal (that is, the information of the fourth image), and the second image sensing assembly converts the fourth light into a fifth electrical signal (the information of the fifth image). The fourth electrical signal is then converted by A/D into a third digital image signal, and the fifth electrical signal is converted by A/D into a fourth digital image signal; both digital image signals are transmitted to the ISP for processing (for example, fusion) to obtain an image of the target object in a specific format. Further, optionally, the ISP may transmit the image of the target object to a display screen for display.
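A hedged sketch of the quantization step in this chain is given below; the bit depth and frame sizes are assumed for illustration and are not specified in this passage.

```python
import numpy as np


def adc(analog_frame: np.ndarray, bits: int = 10) -> np.ndarray:
    """Toy A/D conversion: quantize a normalized analog frame
    (floats in 0..1, standing in for an electrical signal) into an
    n-bit digital image signal. The 10-bit depth is an assumption."""
    levels = (1 << bits) - 1
    return np.round(np.clip(analog_frame, 0.0, 1.0) * levels).astype(np.uint16)


# Simulated electrical signals; the frame sizes (roughly 8M and 2M pixels)
# are illustrative values, not figures taken from this passage.
fourth_electrical = np.random.rand(2448, 3264)   # high-resolution channel
fifth_electrical = np.random.rand(1224, 1632)    # low-resolution channel

third_digital = adc(fourth_electrical)    # handed to the ISP
fourth_digital = adc(fifth_electrical)    # handed to the ISP
```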
Based on the structure and functional principles of the camera module described above, this application may further provide a terminal device. The terminal device may include the camera module of Embodiment 1 above or the camera module of Embodiment 2 above. Further, optionally, the terminal device may also include a memory and a processor, where the memory is used to store programs or instructions, and the processor is used to call the programs or instructions to control the camera module to acquire the image of the target object. It can be understood that the terminal device may also include other components, such as a wireless communication apparatus, a touch screen, and a display screen.
As shown in FIG. 12, which is a schematic structural diagram of a terminal device provided by this application, the terminal device 1200 may include a processor 1201, a memory 1202, a camera module 1203, a display screen 1204, and the like. It should be understood that the hardware structure shown in FIG. 12 is only an example; a terminal device to which this application applies may have more or fewer components than shown in FIG. 12, may combine two or more components, or may have a different component configuration. The various components shown in FIG. 12 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
The processor 1201 may include one or more processing units. For example, the processor 1201 may include an application processor (AP), a graphics processing unit (GPU), an image signal processor (ISP), a controller, a digital signal processor (DSP), and the like. The different processing units may be independent devices or may be integrated into one or more processors 1201.
The memory 1202 may be a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art. An exemplary memory 1202 is coupled to the processor 1201 so that the processor 1201 can read information from and write information to the memory 1202. Of course, the memory 1202 may also be a component of the processor 1201, or the processor 1201 and the memory 1202 may exist in the terminal device as discrete components.
The camera module 1203 may be used to capture moving and still images, among other things. In some embodiments, the terminal device may include one or N camera modules 1203, where N is a positive integer. For the introduction of the camera module 1203, refer to the descriptions of the foregoing embodiments, which are not repeated here.
When the camera module 1203 is applied as a vehicle-mounted camera module, it can be divided by function into driving-assistance camera modules, parking-assistance camera modules, and in-cabin driver-monitoring camera modules. Driving-assistance camera modules are used for driving recording, lane-departure warning, door-opening warning, blind-spot monitoring, traffic-sign recognition, and the like; they include intelligent forward-view cameras (for example, monocular/binocular/trinocular), which can be used for dynamic object detection (vehicles, pedestrians), static object detection (traffic lights, traffic signs, lane lines, and so on) and drivable-space segmentation; side-view assistance cameras (for example, wide-angle), used to monitor dynamic targets in the rear-view mirror blind spot while driving; and night-vision assistance cameras (for example, night-vision cameras), which better detect target objects at night or in other poor-light conditions. Parking-assistance camera modules can be used for reversing images and 360° surround view (for example, wide-angle/fisheye), are mainly used for low-speed, short-range perception, and can form a seamless 360-degree panoramic top view around the vehicle. The in-cabin driver-monitoring camera module mainly provides one or more levels of warning for dangerous situations such as driver fatigue, distraction, and irregular driving. According to the installation position of the vehicle-mounted camera module in the terminal device, it can be divided into front-view, side-view, rear-view, and built-in camera modules.
The display screen 1204 may be used to display images, videos, and the like, and may include a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum-dot light-emitting diode (QLED), and so on. In some embodiments, the terminal device may include one or Q display screens 1204, where Q is a positive integer greater than 1. For example, the terminal device may implement the display function through the GPU, the display screen 1204, the processor 1201, and the like.
For example, the terminal device may be a vehicle, a smartphone, a smart-home device, a smart-manufacturing device, a robot, an unmanned aerial vehicle, or an intelligent transportation device (such as an AGV or an unmanned transport vehicle).
Based on the above content and the same concept, this application provides an imaging method; see the description of FIG. 13. The imaging method can be applied to any camera module shown in Embodiment 1 above; in other words, the imaging method can be implemented based on any camera module shown in Embodiment 1 above.
As shown in FIG. 13, the imaging method includes the following steps:
Step 1301: receive light from a target object.
Step 1301 may be implemented by the first optical lens assembly; refer to the foregoing description of the first optical lens assembly receiving light from the target object, which is not repeated here.
Step 1302: split the light from the target object to obtain first light and second light, propagate the first light to the first image sensing assembly, and propagate the second light to the second image sensing assembly.
Here, the ratio of the intensity of the first light to that of the second light is less than 1.
Step 1302 may be implemented by the light-splitting assembly; for possible implementations, refer to the foregoing related descriptions, which are not repeated here.
In a possible implementation, photoelectric conversion is performed on the first light to obtain information of a first image, photoelectric conversion is performed on the second light to obtain information of a second image, and an image of the target object is generated according to the information of the first image and the information of the second image. Further, optionally, the information of the second image may be up-sampled to obtain information of a third image, and the information of the first image and the information of the third image may be fused to obtain the image of the target object, where the third image corresponding to the information of the third image has the same resolution as the first image corresponding to the information of the first image. This process may be implemented by the first processing component; for possible implementations, refer to the foregoing description of the first processing component, which is not repeated here.
Based on the above content and the same concept, this application provides an imaging method; see the description of FIG. 14. The imaging method can be applied to any camera module shown in Embodiment 2 above; in other words, the imaging method can be implemented based on any camera module shown in Embodiment 2 above.
As shown in FIG. 14, the imaging method includes the following steps:
Step 1401: receive light from a target object.
Step 1402: propagate third light to the first image sensing assembly and fourth light to the second image sensing assembly.
Both step 1401 and step 1402 may be implemented by the second optical lens assembly and the third optical lens assembly; for possible implementations, refer to the foregoing introduction of the second and third optical lens assemblies, which is not repeated here.
In a possible implementation, photoelectric conversion may be performed on the received third light to obtain information of a fourth image, and photoelectric conversion may be performed on the received fourth light to obtain information of a fifth image; the image of the target object is then obtained according to the information of the fourth image and the information of the fifth image. Specifically, the information of the fifth image is up-sampled to obtain information of a sixth image, and the information of the fourth image and the information of the sixth image are fused to obtain the image of the target object, where the resolution of the sixth image corresponding to the information of the sixth image is the same as the resolution of the fourth image corresponding to the information of the fourth image. This process may be implemented by the second processing component; for possible implementations, refer to the foregoing description of the second processing component, which is not repeated here.
It should be noted that the above imaging methods can be applied to the Internet of Vehicles, such as vehicle-to-everything (V2X), long-term-evolution vehicle (LTE-V), and vehicle-to-vehicle (V2V) communication.
Some steps of the methods in the embodiments of this application may be implemented by hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor so that the processor can read information from and write information to the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC, and the ASIC may be located in a network device or a terminal device. Of course, the processor and the storage medium may also exist as discrete components.
In the various embodiments of this application, unless otherwise specified or in the case of a logical conflict, the terms and/or descriptions in different embodiments are consistent and may be referenced by one another, and the technical features in different embodiments may be combined according to their inherent logical relationships to form new embodiments.
In this application, "perpendicular" does not mean absolutely perpendicular; a certain error is allowed. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate that A exists alone, that both A and B exist, or that B exists alone, where A and B may be singular or plural. In the text of this application, the character "/" generally indicates an "or" relationship between the associated objects; in the formulas of this application, the character "/" indicates a "division" relationship between the associated objects. In addition, in this application the word "exemplarily" is used to mean serving as an example, illustration, or explanation. Any embodiment or design described as an "example" in this application should not be construed as being preferred over or more advantageous than other embodiments or designs; rather, the use of the word "example" is intended to present a concept in a concrete manner and does not limit this application.
It can be understood that the various numerical designations involved in this application are merely distinctions made for convenience of description and are not intended to limit the scope of the embodiments of this application. The sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic. The terms "first", "second", and similar expressions are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. Moreover, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a method, system, product, or device that comprises a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to the process, method, product, or device.
Although this application has been described in conjunction with specific features and embodiments thereof, it is evident that various modifications and combinations may be made without departing from the spirit and scope of this application. Accordingly, the specification and drawings are merely exemplary illustrations of the solutions defined by the appended claims and are deemed to cover any and all modifications, variations, combinations, or equivalents within the scope of this application.
Obviously, those skilled in the art can make various changes and modifications to this application without departing from the spirit and scope of the present invention. If these modifications and variations of the embodiments of this application fall within the scope of the claims of this application and their equivalents, this application is also intended to include them.
Claims (15)
- A camera module, comprising a first optical lens assembly, a light-splitting assembly, a first image sensing assembly, and a second image sensing assembly, wherein the first image sensing assembly and the second image sensing assembly have the same photosensitive area, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly; the first optical lens assembly is configured to receive light from a target object; and the light-splitting assembly is configured to split the light propagating through the first optical lens assembly to obtain first light and second light, and to propagate the first light to the first image sensing assembly and the second light to the second image sensing assembly.
- The camera module according to claim 1, wherein the camera module further comprises a first polarizer located between the light-splitting assembly and the first image sensing assembly.
- The camera module according to claim 2, wherein the polarization direction of the first polarizer is perpendicular to the main polarization direction of glare propagating through the first optical lens assembly.
- The camera module according to any one of claims 1 to 3, wherein the ratio of the intensity of the first light to the intensity of the second light is less than 1.
- The camera module according to any one of claims 1 to 4, wherein the first image sensing assembly is configured to perform photoelectric conversion on the received first light to obtain information of a first image; the second image sensing assembly is configured to perform photoelectric conversion on the received second light to obtain information of a second image; and the information of the first image and the information of the second image are used to form an image of the target object.
- The camera module according to any one of claims 1 to 5, further comprising a first processing component configured to: receive the information of the first image from the first image sensing assembly and the information of the second image from the second image sensing assembly; and generate the image of the target object according to the information of the first image and the information of the second image.
- The camera module according to claim 6, wherein the first processing component is configured to: up-sample the information of the second image to obtain information of a third image, wherein the resolution of the third image corresponding to the information of the third image is the same as the resolution of the first image corresponding to the information of the first image; and fuse the information of the first image and the information of the third image to obtain the image of the target object.
- A terminal device, comprising the camera module according to any one of claims 1 to 7.
- The terminal device according to claim 8, wherein the terminal device comprises any one of the following: a smartphone, a vehicle, a smart-home device, a smart-manufacturing device, a robot, an unmanned aerial vehicle, a surveying and mapping device, or an intelligent transportation device.
- An imaging method, applied to a camera module, wherein the camera module comprises a first image sensing assembly and a second image sensing assembly, the first image sensing assembly and the second image sensing assembly have the same photosensitive area, and the resolution of the first image sensing assembly is greater than the resolution of the second image sensing assembly, the method comprising: receiving light from a target object; and splitting the light from the target object to obtain first light and second light, propagating the first light to the first image sensing assembly, and propagating the second light to the second image sensing assembly.
- The method according to claim 10, wherein the camera module further comprises a first polarizer located between the light-splitting assembly and the first image sensing assembly.
- The method according to claim 11, wherein the polarization direction of the first polarizer is perpendicular to the main polarization direction of glare propagating through the first optical lens assembly.
- The method according to any one of claims 10 to 12, wherein the ratio of the intensity of the first light to the intensity of the second light is less than 1.
- The method according to any one of claims 10 to 13, further comprising: performing photoelectric conversion on the first light to obtain information of a first image; performing photoelectric conversion on the second light to obtain information of a second image; and generating an image of the target object according to the information of the first image and the information of the second image.
- The method according to claim 14, wherein generating the image of the target object according to the information of the first image and the information of the second image comprises: up-sampling the information of the second image to obtain information of a third image, wherein the resolution of the third image corresponding to the information of the third image is the same as the resolution of the first image corresponding to the information of the first image; and fusing the information of the first image and the information of the third image to obtain the image of the target object.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110311311.XA CN115134480A (en) | 2021-03-24 | 2021-03-24 | Camera module, terminal equipment and imaging method |
CN202110311311.X | 2021-03-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022199416A1 true WO2022199416A1 (en) | 2022-09-29 |
Family
ID=83374209
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/080768 WO2022199416A1 (en) | 2021-03-24 | 2022-03-14 | Camera module, terminal device, and imaging method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115134480A (en) |
WO (1) | WO2022199416A1 (en) |
- 2021-03-24: CN application CN202110311311.XA filed, published as CN115134480A (status: active, pending)
- 2022-03-14: WO application PCT/CN2022/080768 filed, published as WO2022199416A1 (status: active, application filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200191657A1 (en) * | 2014-09-11 | 2020-06-18 | Mazen Zawaideh | Imaging Spectropolarimeter |
CN107563971A (en) * | 2017-08-12 | 2018-01-09 | 四川精视科技有限公司 | A kind of very color high-definition night-viewing imaging method |
CN111198445A (en) * | 2018-11-16 | 2020-05-26 | 华为技术有限公司 | Equipment and method for light-splitting polarization imaging |
CN111756969A (en) * | 2020-06-16 | 2020-10-09 | RealMe重庆移动通信有限公司 | Optical module and electronic equipment |
CN111726493A (en) * | 2020-06-17 | 2020-09-29 | Oppo广东移动通信有限公司 | Camera module and terminal equipment |
Also Published As
Publication number | Publication date |
---|---|
CN115134480A (en) | 2022-09-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6795030B2 (en) | Imaging control device, imaging control method, and imaging device | |
WO2019192418A1 (en) | Automobile head-up display system and obstacle prompting method thereof | |
US9317754B2 (en) | Object identifying apparatus, moving body control apparatus, and information providing apparatus | |
US9304301B2 (en) | Camera hardware design for dynamic rearview mirror | |
CN104076514B (en) | A kind of automobile information display method and device | |
CN101088027B (en) | Stereo camera for a motor vehicle | |
US20170234976A1 (en) | High Dynamic Range Imaging of Environment with a High Intensity Reflecting/Transmitting Source | |
US10960829B2 (en) | Movable carrier auxiliary system and control method thereof | |
CN104039610B (en) | Camera chain, the camera chain particularly for vehicle | |
JP6981410B2 (en) | Solid-state image sensor, electronic equipment, lens control method and vehicle | |
WO2020155739A1 (en) | Image sensor, method for acquiring image data from image sensor, and camera device | |
Wang et al. | On the application of cameras used in autonomous vehicles | |
JP2020150427A (en) | Imaging device, imaging optical system, and moving object | |
US9197819B2 (en) | Exposure-suppressing imaging system | |
WO2022199416A1 (en) | Camera module, terminal device, and imaging method | |
KR102552490B1 (en) | Image processing apparatus and method for vehicle | |
US20230098424A1 (en) | Image processing system, mobile object, image processing method, and storage medium | |
JP7484904B2 (en) | Image pickup device, signal processing device, signal processing method, program, and image pickup device | |
JP2015194388A (en) | Imaging device and imaging system | |
CN114360471B (en) | Display adjustment method, device, system and storage medium | |
CN108566507A (en) | A kind of special forward looking camera platforms of round-the-clock ADAS | |
JPWO2017028848A5 (en) | ||
CN116953898A (en) | Optical imaging module, optical imaging system and terminal equipment | |
JP2001318398A (en) | Vehicle periphery viewing device | |
JP7244129B1 (en) | night vision camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22774078; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 22774078; Country of ref document: EP; Kind code of ref document: A1 |