WO2021022696A1 - Image acquisition apparatus and method, electronic device and computer-readable storage medium - Google Patents
Image acquisition apparatus and method, electronic device and computer-readable storage medium
- Publication number
- WO2021022696A1 (PCT/CN2019/116308)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- dimensional image
- module
- dimensional
- scene
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/139—Format conversion, e.g. of frame-rate or size
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/156—Mixing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
Definitions
- the present invention relates to the field of imaging technology, in particular to an image acquisition device and method, electronic equipment, and computer-readable storage medium.
- the two-dimensional imaging technology of traditional image sensors has become increasingly mature, and the images collected can be in color mode or grayscale mode, and have high resolution.
- the collected image information is incomplete to a certain extent, such as lack of depth information.
- the exemplary image acquisition method cannot meet the user's needs for high-resolution images and complete image information.
- An image acquisition device including:
- An acquisition module configured to acquire a first two-dimensional image, a second two-dimensional image, and a three-dimensional image of the scene; the resolution of the first two-dimensional image is higher than the resolution of the second two-dimensional image;
- a registration module connected to the acquisition module, configured to perform image registration on the first two-dimensional image and the second two-dimensional image, and obtain the mapping relationship between the first two-dimensional image and the second two-dimensional image;
- a fusion module which connects the acquisition module and the registration module, and is configured to perform image fusion on the three-dimensional image and the first two-dimensional image according to the mapping relationship to obtain a fused image
- the up-sampling module is connected to the fusion module and is configured to up-sample the fused image to obtain a high-resolution three-dimensional depth image.
- the collection module includes:
- the first collection unit is respectively connected to the registration module and the fusion module, and is configured to collect image light reflected by the scene to form a first two-dimensional image;
- the second collection unit is respectively connected to the registration module and the fusion module, and is configured to collect a second two-dimensional image of the scene and output to the registration module, and collect the three-dimensional image of the scene and output to the fusion module .
- the second collection unit includes:
- the control circuit is configured to control the light source to emit the first pulsed light to the scene
- a detection circuit connected to the control circuit, configured to receive the second pulsed light reflected back from the scene, and detect the two-dimensional gray value and/or depth information according to the first pulsed light and the second pulsed light;
- the readout circuit is respectively connected to the detection circuit, the registration module, and the fusion module, and is configured to read out the second two-dimensional image information according to the two-dimensional gray value, and read out the three-dimensional image information according to the depth information.
- the detection circuit includes:
- a photodetector connected to the control circuit, configured to detect the second pulsed light reflected by the scene and generate a trigger signal
- the conversion circuit is connected to the photodetector and the readout circuit, and is configured to detect the two-dimensional gray value according to the trigger signal in the two-dimensional mode, and obtain distance information according to the first pulsed light and the second pulsed light in the three-dimensional mode.
- the conversion circuit includes:
- a first switch, a second switch, a counter, an oscillator, and a decoder;
- the static contact of the first switch is connected to the input end of the counter; the first dynamic contact of the first switch, the first end of the oscillator, and the output end of the photodetector are connected in common; the second dynamic contact of the first switch, the second end of the oscillator, and the first end of the decoder are connected in common; the second end of the decoder is connected to the static contact of the second switch; and the dynamic contact of the second switch and the output end of the counter are commonly connected to the input end of the readout circuit.
- the collection module further includes:
- the filter unit is arranged between the scene and the first collection unit, and/or between the scene and the second collection unit, and is configured to perform spectral filtering on the scene reflected light.
- the collection module further includes:
- the dimming unit is arranged between the scene and the first collection unit, and/or between the scene and the second collection unit, and is configured to adjust the light intensity of the scene reflected light.
- the collection module further includes:
- the switch unit is respectively connected to the second acquisition unit, the registration module, and the fusion module, and is configured to control the connection state of the second acquisition unit and the registration module, and control the connection state of the second acquisition unit and the fusion module.
- the registration module includes:
- An extraction unit connected to the acquisition module, and configured to respectively extract feature point groups corresponding to the first two-dimensional image and the second two-dimensional image;
- the mapping unit is connected to the extraction unit and the fusion module, and is configured to obtain the mapping relationship between the first two-dimensional image and the second two-dimensional image according to a feature point group.
- the fusion module includes:
- a transformation unit connected to the acquisition module and the registration module, and configured to acquire a projective transformed image according to the three-dimensional image and the mapping relationship;
- the image fusion unit is connected to the transformation unit and the up-sampling module, and is configured to fuse the projection transformed image and the first two-dimensional image to obtain a fused image.
- An image acquisition method including:
- collecting a first two-dimensional image, a second two-dimensional image, and a three-dimensional image of the scene; the resolution of the first two-dimensional image is higher than the resolution of the second two-dimensional image;
- performing image registration on the first two-dimensional image and the second two-dimensional image to obtain a mapping relationship between the first two-dimensional image and the second two-dimensional image;
- performing image fusion on the three-dimensional image and the first two-dimensional image according to the mapping relationship to obtain a fused image;
- up-sampling the fused image to obtain a high-resolution three-dimensional depth image.
- An electronic device includes a memory and a processor, and a computer program is stored in the memory; when the computer program is executed by the processor, the processor executes the steps of the image acquisition method described above.
- a computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the image acquisition method described above are realized.
- the above-mentioned image acquisition device and image acquisition method collect a scene to obtain a first two-dimensional image, a second two-dimensional image, and a three-dimensional image with different resolutions and different image information, perform registration on the differing images to obtain the mapping relationship between them, fuse the images with different resolutions and different image information according to the mapping relationship, and further improve the resolution of the fused image through up-sampling, so as to obtain a high-resolution three-dimensional depth image; at the same time, the fusion of different characteristic information of the image, such as color information, gray information, and depth information, is realized, which improves the integrity of the image information.
- FIG. 1 is a schematic diagram of the structure of an image acquisition device in an embodiment
- FIG. 2 is a schematic diagram of the structure of a registration module in an embodiment
- Figure 3 is a schematic structural diagram of a fusion module in an embodiment
- FIG. 4 is a schematic diagram of the structure of the acquisition module in an embodiment
- Figure 5 is a schematic diagram of the structure of a second collection unit in an embodiment
- FIG. 6 is a schematic diagram of the structure of a detection circuit in an embodiment
- FIG. 7 is a circuit diagram of a conversion circuit in an embodiment
- FIG. 8a to FIG. 8d are schematic structural diagrams of a collection module in another embodiment
- FIG. 9 is a schematic structural diagram of a collection module in yet another embodiment.
- FIG. 10 is a schematic circuit diagram of a switch unit in an embodiment
- Fig. 11 is a flowchart of an image acquisition method in an embodiment.
- FIG. 1 is a schematic structural diagram of an image acquisition device in an embodiment.
- the image acquisition device includes an acquisition module 100, a registration module 200, a fusion module 300, and an up-sampling module 400.
- the acquisition module 100 is configured to acquire a first two-dimensional image, a second two-dimensional image, and a three-dimensional image of the scene, and the resolution of the first two-dimensional image is higher than the resolution of the second two-dimensional image.
- the registration module 200 connected to the acquisition module 100, is configured to perform image registration on the first two-dimensional image and the second two-dimensional image, and obtain the mapping relationship between the first two-dimensional image and the second two-dimensional image.
- the fusion module 300 is connected to the acquisition module 100 and the registration module 200, and is configured to perform image fusion between the three-dimensional image and the first two-dimensional image according to the mapping relationship to obtain a fused image.
- the up-sampling module 400 connected to the fusion module 300, is configured to up-sample the fused image information to obtain a high-resolution three-dimensional depth image.
- the collection module 100 integrates multiple collection units, and can collect scenes from the same location or different locations in different collection methods, so as to obtain the first two-dimensional image, the second two-dimensional image, and the three-dimensional image.
- that is, images from multiple source channels are obtained.
- the scene is the target collection object, which is composed of one or more objects, and the image corresponding to the scene can be obtained by collecting the scene.
- the first two-dimensional image is an image with only two-dimensional plane information of the scene but no depth direction information; the first two-dimensional image may be an RGB color mode image or a grayscale mode image.
- the first two-dimensional image has a high resolution.
- the second two-dimensional image refers to the two-dimensional grayscale image information of the scene, and the resolution is lower than the first two-dimensional image;
- the three-dimensional image refers to the image with the depth direction information of the scene, and the resolution is lower than the first two-dimensional image.
- a three-dimensional depth image refers to an image with high-resolution image information and depth direction information.
- the first two-dimensional image, the second two-dimensional image, and the three-dimensional image may each correspond to multiple images; the first two-dimensional image, the second two-dimensional image, and the three-dimensional image may all correspond to the same scene at the same time; or the second two-dimensional image and a certain first two-dimensional image correspond to the same scene, and the three-dimensional image and another first two-dimensional image correspond to the same scene.
- different acquisition methods include traditional image sensing acquisition methods and depth sensing acquisition methods.
- there is an overlapping area between the collected viewing angles, that is, a common area.
- the first two-dimensional image, the second two-dimensional image, and the three-dimensional image have corresponding points, such as corresponding feature points.
- the collection module 100 can collect the first two-dimensional image, the second two-dimensional image, and the three-dimensional image at different times when collecting different images of the same scene.
- the first two-dimensional image and the second two-dimensional image can be collected first, so that the registration module 200 obtains the mapping relationship, and then the first two-dimensional image and the three-dimensional image are collected for fusion processing;
- alternatively, the first two-dimensional image, the second two-dimensional image, and the three-dimensional image can be collected at the same time, the registration module 200 acquires the mapping relationship, and the fusion module 300 performs fusion processing on the first two-dimensional image and the three-dimensional image;
- or, when the image acquisition device has already stored the mapping relationship, the acquisition module 100 may only collect the first two-dimensional image and the three-dimensional image during subsequent image processing, and the fusion module 300 then directly performs fusion processing on the first two-dimensional image and the three-dimensional image, avoiding repeated acquisition of the second two-dimensional image and reducing the power consumption of image acquisition.
- since the acquisition module 100 integrates multiple acquisition units, the acquired first two-dimensional image and second two-dimensional image differ to a certain degree because of differences in acquisition method, location, field of view, and resolution; the registration module 200, as a pre-processing module of the fusion module 300, can register the differing images and obtain the mapping relationship between them, so that subsequent images are aligned and fused according to the mapping relationship.
- the registration module 200 can obtain the common area of the first two-dimensional image and the second two-dimensional image through image processing and existing algorithms, extract the corresponding areas or corresponding feature points of the two images in the common area, establish a coordinate system, and calculate the mapping relationship between the images according to the position coordinates of the corresponding areas or corresponding feature points.
- the existing algorithms include the homography matrix (H matrix), interpolation algorithms, and combinations of the two.
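The following is a minimal, hedged sketch of how such a mapping relationship could be estimated with off-the-shelf tools (OpenCV feature matching plus a homography fit). The detector choice, thresholds, and function names are illustrative assumptions, not part of the disclosed apparatus.

```python
import cv2
import numpy as np

def estimate_mapping(img_hi, img_lo):
    """Estimate a homography (H matrix) mapping second-image (low-res)
    coordinates onto first-image (high-res) coordinates.
    Inputs are assumed to be 8-bit grayscale images."""
    orb = cv2.ORB_create(nfeatures=2000)          # feature point extraction
    kp1, des1 = orb.detectAndCompute(img_hi, None)
    kp2, des2 = orb.detectAndCompute(img_lo, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Corresponding feature point groups in the two images
    pts_hi = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_lo = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography from low-res coordinates to high-res coordinates (RANSAC-robust)
    H, _ = cv2.findHomography(pts_lo, pts_hi, cv2.RANSAC, 5.0)
    return H
```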
- the registration module 200 includes an extraction unit 201 and a mapping unit 202.
- the extraction unit 201 is connected to the acquisition module 100 and is configured to extract feature point groups corresponding to the first two-dimensional image and the second two-dimensional image respectively.
- the characteristic points include edges, contours, intersections on curved surfaces, and points of high curvature.
- the extracting unit 201 may extract corresponding feature point groups between different images on the common area of the first two-dimensional image and the second two-dimensional image by using existing algorithms.
- the extraction unit 201 can directly extract the respective high-frequency component images in the two common areas, extract the corresponding feature point group from the high-frequency component images, and obtain the mapping relationship, thereby reducing the computational complexity and improving the registration rate.
- the extraction unit 201 includes a filter and a processor.
- the filter is configured to perform filtering processing on the first two-dimensional image and the second two-dimensional image respectively to obtain the first two-dimensional image high-frequency component and the second two-dimensional image high-frequency component;
- the processor is connected to the filter and is configured to respectively extract the feature point groups of the high-frequency component of the first two-dimensional image and the high-frequency component of the second two-dimensional image.
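As a hedged illustration of the filter-then-extract idea above, one simple high-pass filter is to subtract a Gaussian-blurred copy of the image from the image itself; the kernel size below is an assumption for illustration only.

```python
import cv2

def high_frequency_component(img, ksize=9):
    """Keep only the high-frequency part of a grayscale image (edges,
    contours) before feature point extraction."""
    low = cv2.GaussianBlur(img, (ksize, ksize), 0)  # low-frequency estimate
    return cv2.subtract(img, low)                   # high-frequency residual
```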
- the mapping unit 202 is connected to the extraction unit 201 and the fusion module 300, and is configured to obtain the mapping relationship between the first two-dimensional image and the second two-dimensional image according to the feature point groups. In an embodiment, the mapping unit 202 first calculates a discrete coordinate mapping table based on the discrete feature point groups, and then generates a complete coordinate mapping table from the discrete coordinate mapping table by interpolation, so as to obtain the mapping relationship between the images.
- the mapping unit 202 may be a data processing chip connected to the extraction unit 201.
- the mapping relationship may be stored, so that the image acquisition device can directly use the mapping relationship for image alignment and fusion when it is used again.
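A hedged sketch of the discrete-table-plus-interpolation route described above, using SciPy's griddata to densify the mapping defined at the matched feature points into a complete coordinate mapping table; the array layout and the choice of linear interpolation are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

def dense_mapping_table(pts_lo, pts_hi, lo_shape):
    """Interpolate a complete coordinate mapping table (low-res pixel ->
    high-res coordinate) from the discrete mapping at the feature points.
    pts_lo, pts_hi: (N, 2) arrays of (x, y) coordinates in the two images."""
    h, w = lo_shape[:2]
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    # Pixels outside the convex hull of the feature points come back as NaN.
    map_x = griddata(pts_lo, pts_hi[:, 0], (grid_x, grid_y), method='linear')
    map_y = griddata(pts_lo, pts_hi[:, 1], (grid_x, grid_y), method='linear')
    return np.dstack([map_x, map_y])  # full mapping table, shape (h, w, 2)
```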
- the fusion module 300 can fuse a plurality of different images collected by the collection module 100 to obtain a fusion image with different resolutions and different image information.
- the fused image includes both three-dimensional image information and two-dimensional image information. Therefore, through the fusion module 300, the resolution of the originally captured image can be improved, and at the same time the color information, gray information, and depth information of the image can be merged to improve the integrity of the image information.
- the fusion module 300 includes a transformation unit 301 and an image fusion unit 302.
- the transformation unit 301 is connected to the acquisition module 100 and the registration module 200, and is configured to acquire the projection transformed image according to the three-dimensional image and the mapping relationship.
- the transform unit 301 transforms the three-dimensional image by using an interpolation algorithm and a mapping relationship to form a projected transformed image.
- the projection transformation image and the three-dimensional image have a common area image, and the position coordinates of each point are the same as the position coordinates of the corresponding points of the three-dimensional image.
- the transformation unit 301 may be an image processor.
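The projection transform can be sketched, under the same assumptions as the earlier registration sketch, as a perspective warp of the depth image driven by the mapping relationship (here the homography H); this is an illustrative implementation, not the only possible one.

```python
import cv2

def project_depth(depth_img, H, target_shape):
    """Warp the low-resolution depth (3D) image into the coordinate frame
    of the high-resolution first 2D image using the estimated homography H."""
    h, w = target_shape[:2]
    # Nearest-neighbour keeps depth values unmixed; an illustrative choice.
    return cv2.warpPerspective(depth_img, H, (w, h), flags=cv2.INTER_NEAREST)
```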
- the image fusion unit 302 is connected to the transformation unit 301 and the up-sampling module 400, and is configured to fuse the projection transformed image and the first two-dimensional image to obtain a fused image.
- the image fusion unit 302 separately obtains the common areas of the projection transformed image and the first two-dimensional image, adds the two common areas and averages them, or assigns different weights to different areas for weighted synthesis as needed, to obtain the fused image; alternatively, algorithms such as multi-resolution pyramid image fusion, wavelet transform, and Kalman filtering are used to fuse the projection transformed image and the first two-dimensional image.
- the image fusion unit 302 may be an image fusion device, an image processor, a fusion controller, or the like.
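A minimal sketch of the simplest fusion strategy mentioned above (weighted combination over the common area). It assumes single-channel inputs of the same size and hypothetical weights; the pyramid, wavelet, or Kalman-filter variants named in the text would replace this function.

```python
import numpy as np

def fuse(projected_depth, img_2d, w_depth=0.5, w_2d=0.5):
    """Weighted fusion over the common (valid-depth) area; weights and the
    validity test (depth > 0) are illustrative assumptions."""
    fused = img_2d.astype(np.float32).copy()
    mask = projected_depth > 0                      # common area with valid depth
    fused[mask] = (w_2d * fused[mask]
                   + w_depth * projected_depth[mask].astype(np.float32))
    return fused
```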
- the up-sampling module 400 can up-sample the fused image obtained in the above-mentioned embodiment to further improve the resolution of image display, and generate a high-resolution three-dimensional depth image.
- the up-sampling module 400 may enlarge the fused image through an interpolation algorithm to improve the image enlargement effect.
- interpolation algorithms include but are not limited to nearest neighbor interpolation, bilinear interpolation, and cubic interpolation.
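As a hedged example, up-sampling with any of the interpolation methods listed above reduces to a single resize call; the scale factor and the bicubic choice here are assumptions.

```python
import cv2

def upsample(fused, scale=2):
    """Up-sample the fused image with bicubic interpolation."""
    return cv2.resize(fused, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_CUBIC)
```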
- the image acquisition device may also be provided with an image display module to visually display the three-dimensional depth image, or a display unit may be added directly to the fusion module 300 or the up-sampling module 400 to display the image simultaneously during the fusion or up-sampling process.
- the division of the modules in the image acquisition device described above is only for illustration; in other embodiments, the image acquisition device can be divided into different modules as needed to complete all or part of its functions.
- Each module in the above-mentioned image acquisition device can be implemented in whole or in part by software, hardware and a combination thereof.
- the above-mentioned modules may be embedded in, or independent of, the processor of the computer device in the form of hardware, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
- for example, the above-mentioned registration module, fusion module, and up-sampling module may be embedded in, or independent of, the processor of the computer device in the form of hardware.
- the image acquisition device includes an acquisition module, a registration module, a fusion module, and an up-sampling module.
- the acquisition module collects a scene to obtain a first two-dimensional image, a second two-dimensional image, and a three-dimensional image with different resolutions and different image information.
- the registration module registers the differing images and obtains the mapping relationship between the different images, so that the fusion module merges the images with different resolutions and different image information according to the mapping relationship, and the up-sampling module further improves the resolution of the fused image, enabling the image acquisition device to obtain a high-resolution three-dimensional depth image; at the same time, the fusion of different characteristic information of the image, such as color information, gray information, and depth information, is realized, improving the integrity of the image information.
- FIG. 4 is a schematic diagram of the structure of the acquisition module in an embodiment.
- the collection module 100 includes a first collection unit 101 and a second collection unit 102.
- the first collection unit 101 is respectively connected to the registration module 200 and the fusion module 300, and is configured to collect image light reflected by the scene to form a first two-dimensional image.
- the second acquisition unit 102 is respectively connected to the registration module 200 and the fusion module 300, and is configured to collect a second two-dimensional image of the scene and output to the registration module, and collect the three-dimensional image of the scene and output to the fusion module.
- the first acquisition unit 101 includes an image sensor, and the output ends of the image sensor are respectively connected to the registration module 200 and the fusion module 300 to convert the image light reflected by the scene into electrical signals.
- the obtained electrical signals can be read out and processed by the image sensor independently, or read out and processed with the assistance of other electronic components, to form the first two-dimensional image.
- the image sensor may be a pn junction diode, a CCD (Charge Coupled Device) image sensor, or a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
- the light source of the image light collected by the first collection unit 101 may be a built-in light source of the unit, an ambient light source or other external light sources.
- the light source is not limited to visible light RGB or infrared light.
- the second acquisition unit 102 includes a control circuit 1021, a detection circuit 1022, and a readout circuit 1023.
- the control circuit 1021 is configured to control the light source to emit the first pulsed light to the scene.
- the detection circuit 1022 is connected to the control circuit 1021 and is configured to receive the second pulsed light reflected from the scene, and detect the two-dimensional gray value and/or depth information according to the first pulsed light and the second pulsed light.
- the readout circuit 1023 is respectively connected to the detection circuit 1022, the registration module 200 and the fusion module 300, and is configured to read out the second two-dimensional image information according to the two-dimensional gray value and the three-dimensional image information according to the depth information.
- the control circuit 1021 controls the light source to emit the first pulsed light to the scene according to the start instruction, and at the same time controls the detection circuit 1022 to start work and make the detection circuit 1022 perform detection in a preset manner.
- the first pulsed light may be a continuous pulse of near infrared.
- the control circuit 1021 controls the emission direction, emission time and emission intensity of the light source.
- the control circuit 1021 includes one or more processors and memories, and a light source.
- the detection circuit 1022 receives the second pulsed light reflected back from the scene and converts the optical signal into an electrical signal, thereby recording, according to the electrical signal, the emission time of the first pulsed light, the receiving time of the second pulsed light, and the number of photons in the received second pulsed light; the two-dimensional gray value of the acquired image is calculated according to the number of photons of the second pulsed light, and the time interval is obtained from the emission time and the receiving time to further obtain the depth information. It should be noted that the detection circuit 1022 can also obtain the two-dimensional gray value by detecting ambient light.
- the detection circuit 1022 is configured to have two switchable working modes: in the two-dimensional mode, the two-dimensional gray value of the acquired image is calculated according to the number of photons of the second pulsed light; in the three-dimensional mode, the time interval is obtained from the emission time and the receiving time, and the depth information is obtained from it, so that the mode can be switched according to actual needs to obtain the corresponding information.
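The two working modes reduce to two simple relations, sketched below under hedged assumptions: the gray value follows the photon count (the count-to-gray scaling is illustrative), and the depth follows half the round-trip time of flight times the speed of light.

```python
C = 299_792_458.0  # speed of light, m/s

def gray_from_photon_count(count, max_count=1023):
    """2D mode: map the photon count onto an 8-bit gray value.
    max_count and the linear scaling are illustrative assumptions."""
    return min(255, int(255 * count / max_count))

def depth_from_time_of_flight(t_emit, t_receive):
    """3D mode: round-trip time of flight gives the distance in metres."""
    return C * (t_receive - t_emit) / 2.0
```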
- the detection circuit 1022 includes a photodetector and a conversion circuit.
- the photodetector is configured to detect the second pulsed light reflected by the scene and generate a trigger signal.
- the photodetector includes SPAD (Single-Photon Avalanche Diode), or other photodetectors.
- the conversion circuit is connected to the photodetector, and is configured to detect the two-dimensional gray value according to the trigger signal in the two-dimensional mode, and obtain distance information according to the first pulsed light and the second pulsed light in the three-dimensional mode.
- different modes can be selected to obtain gray information or depth information correspondingly, so as to realize the sharing of internal circuits and save circuit costs.
- the number of photodetectors can be one or more.
- when there is a single photodetector, there is one corresponding conversion circuit, and the detection circuit scans point by point to detect the two-dimensional gray value or depth information, obtaining the two-dimensional gray value or depth information of the whole plane after multi-point detection.
- when the photodetectors are distributed in an array, the corresponding conversion circuits are also distributed in an array, with one conversion circuit corresponding to one photodetector (see FIG. 6, taking a SPAD as the photodetector as an example, where the photodetector is 601 and the conversion circuit is 602); the detection circuit then detects multiple points at the same time to obtain the two-dimensional gray value or depth information of the plane.
- the conversion circuit includes a first switch K1, a second switch K2, a counter J1, an oscillator Z1, and a decoder Y1.
- the static contact 0 of the first switch K1 is connected to the input end of the counter J1; the first dynamic contact 2D of the first switch K1, the first end of the oscillator Z1, and the output end of the photodetector SPAD are connected together; the second dynamic contact 3D of the first switch K1, the second end of the oscillator Z1, and the first end of the decoder Y1 are connected together.
- the second end of the decoder Y1 is connected to the static contact 1 of the second switch K2.
- the dynamic contact 2 of the second switch K2 and the output terminal of the counter J1 are connected in common to the input terminal of the readout circuit 1023.
- in the two-dimensional mode, the first dynamic contact 2D of the first switch K1 is closed, the second dynamic contact 3D of the first switch K1 is open, the second switch K2 is open, and the counter J1 is driven by the SPAD to record the number of SPAD trigger events, from which the two-dimensional gray value is obtained.
- scanning methods include mechanical scanning, MEMS scanning and optical phased array scanning.
- in the three-dimensional mode, the first dynamic contact 2D of the first switch K1 is open, the second dynamic contact 3D of the first switch K1 is closed, the second switch K2 is closed, and the counter J1, the oscillator Z1, and the decoder Y1 are combined into a TDC (Time-to-Digital Converter).
- the counter J1 and the decoder Y1 are driven by the oscillator Z1 to record the flight time of the photons, so as to obtain the distance information of a single point; the distance information of other points is then obtained point by point by scanning, and finally the depth information of the whole surface is obtained.
- the detection circuit 1022 can also be configured to collect both kinds of information at the same time, that is, to detect the two-dimensional gray value according to the trigger signal and to obtain the distance information according to the first pulsed light and the second pulsed light; therefore, the two-dimensional gray value and the depth information can be obtained at the same time, so that the readout circuit 1023 can obtain the second two-dimensional image information and the three-dimensional image information simultaneously.
- the readout circuit 1023 reads out the second two-dimensional image information according to the two-dimensional gray value, and reads out the three-dimensional image information according to the depth information.
- the detection circuit 1022 is configured to have two switchable working modes, correspondingly, the readout circuit 1023 reads out the second two-dimensional image information in the two-dimensional mode, and reads out the three-dimensional image information in the three-dimensional mode.
- the detection circuit 1022 is configured to collect two kinds of information at the same time, correspondingly, the readout circuit 1023 reads out the second two-dimensional image information and the three-dimensional image information at the same time.
- the specific device of the readout circuit 1023 is not limited, as long as it can realize the data reading function, for example, it may include a combination of resistors, capacitors, amplifiers, and samplers.
- in this way, the second acquisition unit 102 can either acquire the second two-dimensional image in the two-dimensional mode and the three-dimensional image in the three-dimensional mode, or have the function of simultaneously collecting the second two-dimensional image and the three-dimensional image.
- the acquisition module 100 can switch to the two-dimensional mode when the registration module 200 requires registration and output only the first two-dimensional image and the second two-dimensional image; when the fusion module 300 needs to perform fusion, it switches to the three-dimensional mode and outputs only the first two-dimensional image and the three-dimensional image, which simultaneously optimizes power consumption, chip area, and cost.
- the first collection unit 101 and the second collection unit 102 of the foregoing embodiments can be integrated in one multi-sensor camera, for example a camera with both an image sensor and a depth sensor; or they can be respectively disposed in two cameras, each with its own independent sensor and auxiliary circuit.
- FIG. 8a to FIG. 8d are schematic diagrams of the structure of the acquisition module in another embodiment.
- the collection module 100 further includes a filter unit 103.
- the filter unit 103 is arranged between the scene and the first acquisition unit 101, and/or between the scene and the second acquisition unit 102 (FIG. 8a takes the arrangement between the scene and the first acquisition unit 101 and between the scene and the second acquisition unit 102 as an example), and is configured to perform spectral filtering on the light reflected by the scene. In this way, the wavelength of the image light can be selected to realize single-wavelength imaging, or to realize the acquisition of scene color information.
- the filter unit 103 may include one or more filters, or may be configured as a color filter array according to the number of image sensors in the first acquisition unit 101 and the number of photodetectors in the second acquisition unit 102, respectively.
- the color filter array is respectively registered with the image sensor array and the photodetector array, so that each color filter covers at least one image sensor and/or one photodetector, and the colors of the color filters corresponding to different devices can be the same or Different; or, each color filter can be optically coupled to at least two image sensors and/or two photodetectors.
- the collection module 100 further includes a dimming unit 104.
- the dimming unit 104 is arranged between the scene and the first collection unit 101, and/or between the scene and the second collection unit 102 (FIG. 8b takes the arrangement between the scene and the first collection unit 101 and between the scene and the second collection unit 102 as an example), and is configured to adjust the intensity of the light reflected by the scene, so that the collected information is more accurate.
- the dimming unit 104 includes one or more flash lights, or includes an element that enhances or attenuates the light intensity.
- the collection module 100 may include one or both of the filter unit 103 and the dimming unit 104.
- when both are included, the relative position of the filter unit 103 and the dimming unit 104 is not limited and can be adjusted according to actual conditions.
- when the filter unit 103 and the dimming unit 104 are on the same optical axis, referring to FIG. 8c, the filter unit 103 may be arranged between the dimming unit 104 and the first collection unit 101, so that dimming is performed first and then filtering; or, referring to FIG. 8d, the dimming unit 104 is arranged between the filter unit 103 and the first collection unit 101, so that filtering is performed first and then dimming.
- when the dimming unit 104 is a flash, the flash can be set anywhere between the scene and the first collection unit 101, and its position relative to the filter unit 103 is not restricted.
- FIG. 9 is a schematic structural diagram of a collection module in another embodiment.
- the acquisition module 100 further includes a switch unit 105, which is respectively connected to the second acquisition unit 102, the registration module 200, and the fusion module 300, and is configured to control the connection state of the second acquisition unit 102 and the registration module 200, and to control the connection state of the second acquisition unit 102 and the fusion module 300.
- when the device needs to perform registration processing, the switch unit 105 turns on the connection between the second acquisition unit 102 and the registration module 200, and the second acquisition unit 102 outputs the second two-dimensional image to the registration module 200; when the device needs to perform fusion processing, the switch unit 105 turns on the connection between the second acquisition unit 102 and the fusion module 300, and the second acquisition unit 102 outputs the three-dimensional image to the fusion module 300; when the device needs to perform registration processing and fusion processing at the same time, the connections between the second acquisition unit 102 and both the registration module 200 and the fusion module 300 are turned on at the same time.
- the switch unit 105 includes a bidirectional switch K3.
- the static contact of the bidirectional switch K3 is connected to the second acquisition unit 102, the first movable contact of the bidirectional switch K3 is connected to the input terminal of the registration module 200, and the second movable contact of the bidirectional switch K3 is connected to the input terminal of the fusion module 300.
- in the two-dimensional mode, the first movable contact of the bidirectional switch K3 is closed, and the second acquisition unit 102 outputs the second two-dimensional image to the registration module 200; in the three-dimensional mode, the second movable contact of the bidirectional switch K3 is closed, and the second acquisition unit 102 outputs the three-dimensional image to the fusion module 300.
- FIG. 11 shows a flowchart of the image acquisition method provided in this embodiment.
- the image acquisition method includes steps S100, S200, S300, and S400.
- the details are as follows:
- Step S100 Collect a first two-dimensional image, a second two-dimensional image, and a three-dimensional image of the scene; the resolution of the first two-dimensional image is higher than the resolution of the second two-dimensional image.
- Step S200 Perform image registration on the first two-dimensional image and the second two-dimensional image, and obtain a mapping relationship between the first two-dimensional image and the second two-dimensional image.
- Step S300 Perform image fusion on the three-dimensional image and the first two-dimensional image according to the mapping relationship to obtain a fused image.
- Step S400 Up-sampling the fused image to obtain a high-resolution three-dimensional depth image.
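Putting the four steps together, a hedged end-to-end pipeline sketch might look as follows. It reuses the illustrative helper functions sketched in the apparatus description above (estimate_mapping, project_depth, fuse, upsample), and assumes the three images have already been acquired in step S100.

```python
def acquire_high_res_depth_image(img_2d_hi, img_2d_lo, img_3d):
    """Illustrative pipeline for steps S200-S400 on already-acquired images."""
    H = estimate_mapping(img_2d_hi, img_2d_lo)              # S200: registration
    projected = project_depth(img_3d, H, img_2d_hi.shape)   # S300: projection transform
    fused = fuse(projected, img_2d_hi)                      # S300: image fusion
    return upsample(fused)                                  # S400: up-sampling
```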
- step S100 is executed by the acquisition module in the foregoing embodiment, and images of multiple source channels can be obtained.
- step S100 includes: step S101 and step S102.
- Step S101 Collect image light reflected by the scene to form a first two-dimensional image.
- Step S102 Collect a second two-dimensional image of the scene in the two-dimensional mode, and collect a three-dimensional image of the scene in the three-dimensional mode. Specifically, step S102 controls the light source to emit the first pulsed light to the scene, receives the second pulsed light reflected back from the scene, detects the two-dimensional gray value or depth information according to the ambient light, the first pulsed light, and the second pulsed light, and then reads out the second two-dimensional image information according to the two-dimensional gray value and the three-dimensional image information according to the depth information.
- step S100 further includes: step S103 and step S104.
- Step S103 Perform spectral filtering on the image light reflected by the scene, so that the wavelength of the image light can be selected to realize single-wavelength imaging, or to realize the acquisition of scene color information.
- Step S104 Adjust the intensity of the image light reflected by the scene, so that the collected information is more accurate.
- step S101 is performed by the first collection unit of the above embodiment
- step S102 is performed by the second collection unit of the above embodiment
- step S103 is performed by the filter unit of the above embodiment
- step S104 is performed by the dimming unit of the above embodiment; the details will not be repeated here.
- step S200 is performed by the registration module in the above-mentioned embodiment, which can register the differing images and obtain the mapping relationship between the different images, so that subsequent images are aligned and fused according to the mapping relationship; for a specific description of step S200, refer to the relevant description of the registration module in the foregoing embodiment.
- step S200 includes: step S201 and step S202.
- Step S201 extracting feature point groups corresponding to the first two-dimensional image and the second two-dimensional image respectively.
- Step S202 Obtain the mapping relationship between the first two-dimensional image and the second two-dimensional image according to the feature point group.
- step S201 is performed by the extraction unit of the above-mentioned embodiment
- step S202 is performed by the mapping unit of the above-mentioned embodiment, which will not be repeated here.
- step S300 is performed by the fusion module in the above-mentioned embodiment, which can fuse images with different resolutions and different image information into a three-dimensional depth image, improve the resolution of the originally acquired image, and realize the fusion of different feature information, improving the integrity of the image information; for the specific description of step S300, refer to the related description of the fusion module in the foregoing embodiment.
- step S300 includes: step S301 and step S302.
- Step S301 Obtain a projected transformed image according to the three-dimensional image and the mapping relationship.
- Step S302 fusing the projected transformed image and the first two-dimensional image to obtain a fused image.
- step S301 is performed by the transformation unit of the above-mentioned embodiment
- step S302 is performed by the image fusion unit of the above-mentioned embodiment, which will not be repeated here.
- step S400 is performed by the up-sampling module in the above-mentioned embodiment, which can further improve the resolution of image display and generate a high-resolution three-dimensional depth image; for the specific description of step S400, refer to the related description of the up-sampling module in the above-mentioned embodiment, which will not be repeated here.
- the image acquisition method may further include step S500 to visually display the three-dimensional depth image.
- step S500 may be performed after step S300 or step S400, or executed simultaneously with step S300 or step S400.
- in the above image acquisition method, the first two-dimensional image, the second two-dimensional image, and the three-dimensional image with different resolutions and different image information are obtained by collecting the scene; the differing images are registered to obtain the mapping relationship between the different images; then, according to the mapping relationship, the images with different resolutions and different image information are fused, and the resolution of the fused image is further improved through up-sampling, finally obtaining a high-resolution three-dimensional depth image; at the same time, the fusion of different characteristic information of the image, such as color information, gray information, and depth information, is realized, which improves the integrity of the image information.
- although the steps in the flowchart of FIG. 11 are displayed in sequence as indicated by the arrows, these steps are not necessarily performed in the order indicated by the arrows. Unless specifically stated herein, the execution of these steps is not strictly limited in order, and they can be executed in other orders. Moreover, at least part of the steps in FIG. 11 may include multiple sub-steps or multiple stages; these sub-steps or stages are not necessarily executed at the same time, but can be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with at least part of the other steps, or with the sub-steps or stages of other steps.
- An embodiment of the present application also provides an electronic device, including a memory and a processor, and a computer program is stored in the memory; when the computer program is executed by the processor, the processor is caused to execute the steps of the image acquisition method described in the above embodiments.
- the embodiment of the present application also provides a computer-readable storage medium.
- One or more non-volatile computer-readable storage media containing computer-executable instructions, which, when executed by one or more processors, cause the processors to execute the steps of the image acquisition method in any of the above embodiments.
- Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
- Volatile memory may include random access memory (RAM), which acts as external cache memory.
- RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous Link (Synchlink) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
Abstract
An image acquisition apparatus and method, an electronic device, and a computer-readable storage medium. The image acquisition apparatus comprises an acquisition module, a registration module, a fusion module, and an up-sampling module. The acquisition module acquires scenes so as to obtain a first two-dimensional image, a second two-dimensional image and a three-dimensional image that have different resolutions and different image information. The registration module performs relevant registration on the images among which there are differences, and obtains a mapping relationship between the different images, so that according to said mapping relationship, the fusion module fuses the images that have different resolutions and different image information. Moreover, the resolution of the fused image is further improved by means of the up-sampling module, so that the image acquisition apparatus obtains a high-resolution three-dimensional depth image, and at the same time achieves the fusion of different feature information such as color information, grayscale information and depth information of the images, thereby improving the integrity of image information.
Description
本发明涉及成像技术领域,特别是涉及一种图像获取装置和方法、电子设备、计算机可读存储介质。The present invention relates to the field of imaging technology, in particular to an image acquisition device and method, electronic equipment, and computer-readable storage medium.
随着数码技术、半导体制造技术以及网络的迅速发展,传统图像传感器的二维成像技术日益成熟,其采集到的图像可以是色彩模式也可以是灰度模式,且具有很高的分辨率。但是,其采集到的图像信息在一定程度上并不完整,比如缺乏深度信息。With the rapid development of digital technology, semiconductor manufacturing technology, and the Internet, the two-dimensional imaging technology of traditional image sensors has become increasingly mature, and the images collected can be in color mode or grayscale mode, and have high resolution. However, the collected image information is incomplete to a certain extent, such as lack of depth information.
而随着科学技术的发展,越来越多的行业领域已不再满足二维显示提供的平面信息,要求真实的反映三维的实际世界。因而既能输出三维图像信息,又能输出二维图像信息的深度传感器的应用越来越多。但是,相较于传统图像传感器,深度传感器采集的图像,图像的分辨率低、色彩模式受限,并不能满足用户的需求。With the development of science and technology, more and more industry fields no longer satisfy the plane information provided by the two-dimensional display and require a true reflection of the actual three-dimensional world. Therefore, there are more and more applications of depth sensors that can output both three-dimensional image information and two-dimensional image information. However, compared with the traditional image sensor, the image collected by the depth sensor has low resolution and limited color mode, which cannot meet the needs of users.
因此,示例性的图像获取方式满足不了用户高分辨图像同时图像信息完整的需求。Therefore, the exemplary image acquisition method cannot meet the user's needs for high-resolution images and complete image information.
发明内容Summary of the invention
基于此,有必要提供一种能够同时提高图像分辨率及图像信息完整性的图像获取装置和方法、电子设备、计算机可读存储介质。Based on this, it is necessary to provide an image acquisition device and method, electronic equipment, and computer-readable storage medium that can simultaneously improve image resolution and image information integrity.
为了实现本发明的目的,本发明采用如下技术方案:In order to achieve the purpose of the present invention, the present invention adopts the following technical solutions:
一种图像获取装置,包括:An image acquisition device, including:
采集模块,被配置为采集场景的第一二维图像、第二二维图像以及三维图像;所述第一二维图像的分辨率高于所述第二二维图像的分辨率;An acquisition module configured to acquire a first two-dimensional image, a second two-dimensional image, and a three-dimensional image of the scene; the resolution of the first two-dimensional image is higher than the resolution of the second two-dimensional image;
配准模块,连接所述采集模块,被配置为将所述第一二维图像和所述第二二维图像进行图像配准,获取所述第一二维图像和所述第二二维图像的映射关系;A registration module, connected to the acquisition module, configured to perform image registration on the first two-dimensional image and the second two-dimensional image, and obtain the first two-dimensional image and the second two-dimensional image The mapping relationship;
融合模块,连接所述采集模块和所述配准模块,被配置为根据所述映射关系将所述三维图像和所述第一二维图像进行图像融合,获得融合图像;A fusion module, which connects the acquisition module and the registration module, and is configured to perform image fusion on the three-dimensional image and the first two-dimensional image according to the mapping relationship to obtain a fused image;
上采样模块,连接所述融合模块,被配置为将所述融合图像进行上采样,获得高分辨率三维深度图像。The up-sampling module is connected to the fusion module and is configured to up-sample the fused image to obtain a high-resolution three-dimensional depth image.
在其中一种实施例中,所述采集模块包括:In one of the embodiments, the collection module includes:
第一采集单元,分别连接所述配准模块和所述融合模块,被配置为采集场景反射的图像光形成第一二维图像;The first collection unit is respectively connected to the registration module and the fusion module, and is configured to collect image light reflected by the scene to form a first two-dimensional image;
第二采集单元,分别连接所述配准模块和所述融合模块,被配置为采集场景的第二二维图像并输出至所述配准模块,采集场景的三维图像并输出至所述融合模块。The second collection unit is respectively connected to the registration module and the fusion module, and is configured to collect a second two-dimensional image of the scene and output to the registration module, and collect the three-dimensional image of the scene and output to the fusion module .
在其中一种实施例中,所述第二采集单元包括:In one of the embodiments, the second collection unit includes:
控制电路,被配置为控制光源向场景发射第一脉冲光;The control circuit is configured to control the light source to emit the first pulsed light to the scene;
探测电路,连接所述控制电路,被配置为接收场景反射回的第二脉冲光,根据第一脉冲光和第二脉冲光探测二维灰度值和/或深度信息;A detection circuit, connected to the control circuit, configured to receive the second pulsed light reflected back from the scene, and detect the two-dimensional gray value and/or depth information according to the first pulsed light and the second pulsed light;
读出电路,分别所述探测电路、所述配准模块以及所述融合模块,被配置为根据所述二维灰度值读出第二二维图像信息,根据所述深度信息读出三维图像信息。The readout circuit, the detection circuit, the registration module, and the fusion module, respectively, are configured to read out the second two-dimensional image information according to the two-dimensional gray value, and read out the three-dimensional image according to the depth information information.
在其中一种实施例中,所述探测电路包括:In one of the embodiments, the detection circuit includes:
光电探测器,连接所述控制电路,被配置为探测场景反射的第二脉冲光并生成触发信号;A photodetector, connected to the control circuit, configured to detect the second pulsed light reflected by the scene and generate a trigger signal;
转换电路,连接所述光电探测器和所述读出电路,被配置为在二维模式下根据触发信号探测二维灰度值,在三维模式下根据所述第一脉冲光和所述第二脉冲光获取距离信息。The conversion circuit is connected to the photodetector and the readout circuit, and is configured to detect the two-dimensional gray value according to the trigger signal in the two-dimensional mode, and according to the first pulsed light and the second pulsed light in the three-dimensional mode. Pulse light obtains distance information.
在其中一种实施例中,所述转换电路包括:In one of the embodiments, the conversion circuit includes:
第一开关、第二开关、计数器、振荡器以及译码器;The first switch, the second switch, the counter, the oscillator and the decoder;
所述第一开关的静态触点连接所述计数器的输入端,所述第一开关的第一动态触点、所述振荡器的第一端以及所述光电探测器的输出端共接,所述第一开关的第二动态触点、所述振荡器的第二端、所述译码器的第一端共接,所述译码器的第二端连接第二开关的静态触点,所述第二开关的动态触点与计数器的输出端共接于所述读出电路的输入端。The static contact of the first switch is connected to the input end of the counter, the first dynamic contact of the first switch, the first end of the oscillator, and the output end of the photodetector are connected in common, so The second dynamic contact of the first switch, the second end of the oscillator, and the first end of the decoder are connected in common, and the second end of the decoder is connected to the static contact of the second switch, The dynamic contact of the second switch and the output end of the counter are commonly connected to the input end of the readout circuit.
In one of the embodiments, the acquisition module further includes:
a filter unit, arranged between the scene and the first acquisition unit and/or between the scene and the second acquisition unit, configured to perform spectral filtering on the light reflected by the scene.
In one of the embodiments, the acquisition module further includes:
a dimming unit, arranged between the scene and the first acquisition unit and/or between the scene and the second acquisition unit, configured to adjust the intensity of the light reflected by the scene.
In one of the embodiments, the acquisition module further includes:
a switch unit, connected respectively to the second acquisition unit, the registration module and the fusion module, configured to control the connection state between the second acquisition unit and the registration module and the connection state between the second acquisition unit and the fusion module.
In one of the embodiments, the registration module includes:
an extraction unit, connected to the acquisition module, configured to extract corresponding feature point groups from the first two-dimensional image and the second two-dimensional image respectively;
a mapping unit, connected to the extraction unit and the fusion module, configured to obtain the mapping relationship between the first two-dimensional image and the second two-dimensional image according to the feature point groups.
In one of the embodiments, the fusion module includes:
a transformation unit, connected to the acquisition module and the registration module, configured to obtain a projection-transformed image according to the three-dimensional image and the mapping relationship;
an image fusion unit, connected to the transformation unit and the up-sampling module, configured to fuse the projection-transformed image with the first two-dimensional image to obtain the fused image.
An image acquisition method, including:
acquiring a first two-dimensional image, a second two-dimensional image and a three-dimensional image of a scene, the resolution of the first two-dimensional image being higher than that of the second two-dimensional image;
performing image registration on the first two-dimensional image and the second two-dimensional image to obtain a mapping relationship between the first two-dimensional image and the second two-dimensional image;
fusing the three-dimensional image with the first two-dimensional image according to the mapping relationship to obtain a fused image;
up-sampling the fused image to obtain a high-resolution three-dimensional depth image.
An electronic device, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the image acquisition method described above.
A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the steps of the image acquisition method described above.
With the above image acquisition apparatus and method, a scene is captured to obtain a first two-dimensional image, a second two-dimensional image and a three-dimensional image with different resolutions and different image information; the differing images are registered to obtain the mapping relationship between them; images with different resolutions and different image information are fused according to the mapping relationship; and the resolution of the fused image is further raised by up-sampling. A high-resolution three-dimensional depth image is thereby obtained, and different kinds of characteristic information of the image, such as color information, grayscale information and depth information, are fused, improving the completeness of the image information.
FIG. 1 is a schematic structural diagram of an image acquisition apparatus in an embodiment;
FIG. 2 is a schematic structural diagram of a registration module in an embodiment;
FIG. 3 is a schematic structural diagram of a fusion module in an embodiment;
FIG. 4 is a schematic structural diagram of an acquisition module in an embodiment;
FIG. 5 is a schematic structural diagram of a second acquisition unit in an embodiment;
FIG. 6 is a schematic structural diagram of a detection circuit in an embodiment;
FIG. 7 is a circuit diagram of a conversion circuit in an embodiment;
FIGS. 8a-8d are schematic structural diagrams of an acquisition module in another embodiment;
FIG. 9 is a schematic structural diagram of an acquisition module in another embodiment;
FIG. 10 is a circuit diagram of a switch unit in an embodiment;
FIG. 11 is a flowchart of an image acquisition method in an embodiment.
To facilitate understanding of the present invention, the invention is described more fully below with reference to the accompanying drawings, in which preferred embodiments are shown. The invention may, however, be embodied in many different forms and is not limited to the embodiments described herein; rather, these embodiments are provided so that the disclosure will be thorough and complete.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which the present invention belongs. The terms used in the description herein are for the purpose of describing specific embodiments only and are not intended to limit the invention.
Referring to FIG. 1, FIG. 1 is a schematic structural diagram of an image acquisition apparatus in an embodiment.
In this embodiment, the image acquisition apparatus includes an acquisition module 100, a registration module 200, a fusion module 300 and an up-sampling module 400.
The acquisition module 100 is configured to acquire a first two-dimensional image, a second two-dimensional image and a three-dimensional image of a scene, the resolution of the first two-dimensional image being higher than that of the second two-dimensional image.
The registration module 200, connected to the acquisition module 100, is configured to perform image registration on the first two-dimensional image and the second two-dimensional image and obtain the mapping relationship between them.
The fusion module 300, connected to the acquisition module 100 and the registration module 200, is configured to fuse the three-dimensional image with the first two-dimensional image according to the mapping relationship to obtain a fused image.
The up-sampling module 400, connected to the fusion module 300, is configured to up-sample the fused image to obtain a high-resolution three-dimensional depth image.
In this embodiment, the acquisition module 100 integrates multiple acquisition units and can capture the scene from the same position or from different positions using different acquisition methods, thereby obtaining the first two-dimensional image, the second two-dimensional image and the three-dimensional image, i.e. images from multiple source channels.
The scene is the target to be captured and consists of one or more objects; capturing the scene yields the images corresponding to the scene.
The first two-dimensional image is an image containing only two-dimensional plane information of the scene, without depth information; it may be an RGB color image or a grayscale image, and it has a high resolution. The second two-dimensional image is two-dimensional grayscale image information of the scene, with a resolution lower than that of the first two-dimensional image. The three-dimensional image is an image containing depth information of the scene, also with a resolution lower than that of the first two-dimensional image. The three-dimensional depth image is an image containing both high-resolution image information and depth information. Each of the first two-dimensional image, the second two-dimensional image and the three-dimensional image may correspond to multiple images. The first two-dimensional image, the second two-dimensional image and the three-dimensional image may all correspond to the same scene; alternatively, the second two-dimensional image may correspond to the same scene as one first two-dimensional image while the three-dimensional image corresponds to the same scene as another first two-dimensional image.
The different acquisition methods include a conventional image-sensing acquisition method and a depth-sensing acquisition method. When the same scene is captured, the acquisition fields of view overlap in a common region; within this overlapping region, the first two-dimensional image, the second two-dimensional image and the three-dimensional image have corresponding points, for example corresponding feature points.
When the acquisition module 100 captures different images of the same scene, the first two-dimensional image, the second two-dimensional image and the three-dimensional image may be captured at different times. For example, the first two-dimensional image and the second two-dimensional image may be captured first and, after the registration module 200 has obtained the mapping relationship, the first two-dimensional image and the three-dimensional image are captured for fusion. Alternatively, the first two-dimensional image, the second two-dimensional image and the three-dimensional image may be captured at the same time, the registration module 200 obtains the mapping relationship, and the fusion module 300 then fuses the first two-dimensional image and the three-dimensional image. Or, when the image acquisition apparatus has already stored the mapping relationship, the acquisition module 100 may, in subsequent image processing, capture only the first two-dimensional image and the three-dimensional image, which are fused directly by the fusion module 300; this avoids repeated capture of the second two-dimensional image and reduces the power consumed by image acquisition.
In this embodiment, because the acquisition module 100 integrates multiple acquisition units, the acquired first and second two-dimensional images differ to some extent in their information owing to differences in acquisition method, position, field of view and resolution. The registration module 200, acting as the preprocessing stage of the fusion module 300, registers these differing images and obtains the mapping relationship between them, so that subsequent images can be aligned and fused according to this mapping relationship.
In an embodiment, the registration module 200 may obtain the common region of the first and second two-dimensional images through image processing and existing algorithms, extract corresponding regions or corresponding feature points of the two images within the common region, establish a coordinate system, and compute the mapping relationship between the images from the position coordinates of the corresponding regions or feature points. The existing algorithms include a homography matrix (H matrix), interpolation algorithms, or a combination of the two. Specifically, referring to FIG. 2, the registration module 200 includes an extraction unit 201 and a mapping unit 202.
The extraction unit 201 is connected to the acquisition module 100 and is configured to extract corresponding feature point groups from the first and second two-dimensional images respectively. Optionally, the feature points include edges, contours, intersections on curved surfaces and points of high curvature. In an embodiment, the extraction unit 201 may use existing algorithms to extract the corresponding feature point groups of the two images within their common region.
Further, the extraction unit 201 may directly extract the high-frequency component image of each of the two common regions, extract the corresponding feature point groups from the high-frequency component images and obtain the mapping relationship, thereby reducing computational complexity and increasing the registration rate. Specifically, the extraction unit 201 includes a filter and a processor. The filter is configured to filter the first and second two-dimensional images respectively to obtain the high-frequency component of each; the processor, connected to the filter, is configured to extract the feature point groups of the high-frequency components of the first and second two-dimensional images respectively.
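As a non-authoritative illustration of the filtering step just described, the following sketch (assuming an OpenCV/NumPy toolchain, which the patent does not specify) estimates the low-frequency content of a grayscale image with a Gaussian blur and subtracts it, leaving the high-frequency residual on which feature points can then be detected; the function name and the sigma value are illustrative assumptions.

```python
import cv2
import numpy as np

def high_frequency_component(gray: np.ndarray, sigma: float = 3.0) -> np.ndarray:
    """Return the high-frequency residual of a grayscale image (illustrative sketch)."""
    low = cv2.GaussianBlur(gray, (0, 0), sigma)               # low-frequency estimate
    return gray.astype(np.float32) - low.astype(np.float32)   # detail = image - smoothed image
```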
The mapping unit 202 is connected to the extraction unit 201 and the fusion module 300 and is configured to obtain the mapping relationship between the first and second two-dimensional images according to the feature point groups. In an embodiment, the mapping unit 202 first computes a discrete coordinate mapping table from the discrete feature point groups and then generates a complete coordinate mapping table from the discrete table by interpolation, obtaining the mapping relationship between the images. The mapping unit 202 may be a data processing chip connected to the extraction unit 201.
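A minimal sketch of how such a mapping relationship could be obtained, assuming ORB features and a RANSAC-estimated homography (the text above only requires "existing algorithms" such as an H matrix plus interpolation, so these particular choices and all names below are assumptions rather than the patented implementation):

```python
import cv2
import numpy as np

def estimate_mapping(img_hr: np.ndarray, img_lr: np.ndarray) -> np.ndarray:
    """Estimate a 3x3 homography mapping img_lr coordinates into img_hr coordinates."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_hr, des_hr = orb.detectAndCompute(img_hr, None)   # feature point group, first 2D image
    kp_lr, des_lr = orb.detectAndCompute(img_lr, None)   # feature point group, second 2D image

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_lr, des_hr), key=lambda m: m.distance)

    src = np.float32([kp_lr[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_hr[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # discrete correspondences -> H matrix
    return H
```

In this sketch the discrete feature correspondences play the role of the discrete coordinate mapping table, and applying H to every pixel (with interpolation) yields the complete mapping table described above.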
It should be noted that, after the registration module 200 has obtained the mapping relationship, the mapping relationship may be stored so that, when the image acquisition apparatus is used again, the stored mapping relationship can be used directly for image alignment and fusion.
In this embodiment, the fusion module 300 can fuse the multiple different images captured by the acquisition module 100, obtaining a fused image that combines different resolutions and different image information. The fused image contains both three-dimensional image information and two-dimensional image information. The fusion module 300 therefore raises the resolution of the originally captured images while fusing different kinds of characteristic information, such as color information, grayscale information and depth information, improving the completeness of the image information.
In an embodiment, referring to FIG. 3, the fusion module 300 includes a transformation unit 301 and an image fusion unit 302.
The transformation unit 301 is connected to the acquisition module 100 and the registration module 200 and is configured to obtain a projection-transformed image according to the three-dimensional image and the mapping relationship. For example, the transformation unit 301 transforms the three-dimensional image using an interpolation algorithm and the mapping relationship to form the projection-transformed image. The projection-transformed image and the three-dimensional image share a common-region image, and the position coordinates of each of its points correspond to the position coordinates of the corresponding points of the three-dimensional image. The transformation unit 301 may be an image processor.
The image fusion unit 302 is connected to the transformation unit 301 and the up-sampling module 400 and is configured to fuse the projection-transformed image with the first two-dimensional image to obtain the fused image. Optionally, the image fusion unit 302 obtains the common region of the projection-transformed image and the first two-dimensional image, adds the two common regions and averages them, or applies weighted combination with different weights in different sub-regions as needed, to obtain the fused image; alternatively, the projection-transformed image and the first two-dimensional image may be fused using algorithms such as multi-resolution pyramid image fusion, wavelet transform or Kalman filtering. The image fusion unit 302 may be an image fuser, an image processor, a fusion controller or the like.
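A hedged sketch of this fusion step, assuming the three-dimensional (depth) image shares the pixel grid of the second two-dimensional image (as when both come from the second acquisition unit), that all images are single-channel arrays, and that the simple weighted-average option mentioned above is used; the weight value and function names are assumptions, and pyramid, wavelet or Kalman-filter fusion would replace the last lines:

```python
import cv2
import numpy as np

def fuse(depth_img: np.ndarray, gray_hr: np.ndarray, H: np.ndarray,
         weight: float = 0.5) -> np.ndarray:
    """Warp the depth image into the first image's frame and blend over the common region."""
    h, w = gray_hr.shape[:2]
    projected = cv2.warpPerspective(depth_img, H, (w, h))   # projection-transformed image
    common = projected > 0                                   # overlapping (common) region
    fused = gray_hr.astype(np.float32).copy()
    fused[common] = (weight * fused[common]
                     + (1.0 - weight) * projected[common].astype(np.float32))
    return fused
```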
In this embodiment, the up-sampling module 400 can up-sample the fused image obtained in the above embodiments, further increasing the display resolution and generating a high-resolution three-dimensional depth image. Specifically, the up-sampling module 400 may enlarge the fused image using an interpolation algorithm to improve the enlargement quality. The interpolation algorithms include, but are not limited to, nearest-neighbor interpolation, bilinear interpolation and cubic interpolation.
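For illustration only, the up-sampling step could be realized with a standard resize call; the 2x factor is an assumption, and cv2.INTER_NEAREST, cv2.INTER_LINEAR and cv2.INTER_CUBIC correspond to the three interpolation methods listed above:

```python
import cv2

def upsample(fused, scale: float = 2.0, method: int = cv2.INTER_CUBIC):
    """Enlarge the fused image by the given scale factor (illustrative sketch)."""
    h, w = fused.shape[:2]
    return cv2.resize(fused, (int(w * scale), int(h * scale)), interpolation=method)
```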
It should be noted that, to make it easy to view or evaluate the three-dimensional depth image obtained in the above embodiments, the image acquisition apparatus may further be provided with an image display module for visually displaying the three-dimensional depth image, or a display unit may be added directly to the fusion module 300 or the up-sampling module 400 so that the image is displayed during fusion or up-sampling.
It should be noted that the division of the modules of the above image acquisition apparatus is given only by way of example; in other embodiments, the image acquisition apparatus may be divided into different modules as required to implement all or part of its functions.
Each module of the above image acquisition apparatus may be implemented entirely or partly in software, in hardware, or in a combination of the two. The modules may be embedded in, or independent of, a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to the modules. For example, the registration module, the fusion module and the up-sampling module described above may be embedded in, or independent of, the processor of the computer device in hardware form.
The image acquisition apparatus provided in this embodiment includes an acquisition module, a registration module, a fusion module and an up-sampling module. The acquisition module captures a scene to obtain a first two-dimensional image, a second two-dimensional image and a three-dimensional image with different resolutions and different image information; the registration module registers the differing images and obtains the mapping relationship between them so that the fusion module can fuse images with different resolutions and different image information according to the mapping relationship; and the up-sampling module further raises the resolution of the fused image. The apparatus thus obtains a high-resolution three-dimensional depth image while fusing different kinds of characteristic information, such as color information, grayscale information and depth information, improving the completeness of the image information.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of the acquisition module in an embodiment.
In this embodiment, the acquisition module 100 includes a first acquisition unit 101 and a second acquisition unit 102.
The first acquisition unit 101 is connected respectively to the registration module 200 and the fusion module 300 and is configured to collect the image light reflected by the scene to form the first two-dimensional image.
The second acquisition unit 102 is connected respectively to the registration module 200 and the fusion module 300 and is configured to capture the second two-dimensional image of the scene and output it to the registration module, and to capture the three-dimensional image of the scene and output it to the fusion module.
In an embodiment, the first acquisition unit 101 includes an image sensor whose output is connected respectively to the registration module 200 and the fusion module 300; it converts the image light reflected by the scene into electrical signals, which may be read out and processed by the image sensor itself or with the assistance of other electronic components to form the first two-dimensional image. Optionally, the image sensor may be a pn-junction diode, a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor.
It should be noted that the light source of the image light collected by the first acquisition unit 101 may be a light source built into the unit, ambient light or another external light source, and is not limited to visible RGB light or infrared light.
In an embodiment, referring to FIG. 5, the second acquisition unit 102 includes a control circuit 1021, a detection circuit 1022 and a readout circuit 1023.
The control circuit 1021 is configured to control a light source to emit first pulsed light toward the scene.
The detection circuit 1022 is connected to the control circuit 1021 and is configured to receive the second pulsed light reflected back by the scene and to detect a two-dimensional gray value and/or depth information according to the first pulsed light and the second pulsed light.
The readout circuit 1023 is connected respectively to the detection circuit 1022, the registration module 200 and the fusion module 300 and is configured to read out the second two-dimensional image information according to the two-dimensional gray value and to read out the three-dimensional image information according to the depth information.
In this embodiment, the control circuit 1021 controls the light source to emit the first pulsed light toward the scene according to a start instruction, and at the same time starts the detection circuit 1022 and makes it detect in a preset manner. The first pulsed light may be a continuous train of near-infrared pulses. The control circuit 1021 controls the emission direction, emission time and emission intensity of the light source. In an embodiment, the control circuit 1021 includes one or more processors and memories, and the light source.
In this embodiment, the detection circuit 1022 receives the second pulsed light reflected back by the scene and converts the optical signal into an electrical signal, from which it records the emission time of the first pulsed light, the reception time of the second pulsed light and the number of photons in the received pulse. The two-dimensional gray value of the image is computed from the number of photons of the second pulsed light, and the time interval between the emission time and the reception time yields the depth information. It should be noted that the detection circuit 1022 may also obtain the two-dimensional gray value by detecting ambient light.
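A back-of-the-envelope sketch of how these measurements become pixel values: the gray value follows from the number of returned photons and the depth from half the round-trip time of flight (d = c * dt / 2). The clipping range and function names are assumptions; the formula follows directly from the text.

```python
C = 299_792_458.0  # speed of light in m/s

def gray_from_photons(photon_count: int, full_scale: int = 1023) -> int:
    """Two-dimensional gray value: the (clipped) photon count of the returned pulse."""
    return min(photon_count, full_scale)

def depth_from_times(t_emit_s: float, t_receive_s: float) -> float:
    """Depth in metres from the emission/reception time interval (half the round trip)."""
    return C * (t_receive_s - t_emit_s) / 2.0
```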
Optionally, the detection circuit 1022 is configured with two switchable working modes: in the two-dimensional mode it computes the two-dimensional gray value of the image from the number of photons of the second pulsed light, and in the three-dimensional mode it obtains the time interval between the emission time and the reception time and hence the depth information, so that the mode can be switched according to actual needs to obtain the corresponding information.
In an embodiment, the detection circuit 1022 includes a photodetector and a conversion circuit.
Specifically, the photodetector is configured to detect the second pulsed light reflected by the scene and generate a trigger signal. The photodetector includes a SPAD (Single-Photon Avalanche Diode) or another photodetector.
Specifically, the conversion circuit is connected to the photodetector and is configured to detect the two-dimensional gray value according to the trigger signal in the two-dimensional mode and to obtain the distance information according to the first pulsed light and the second pulsed light in the three-dimensional mode. Different modes can thus be selected through the conversion circuit to obtain grayscale information or depth information accordingly, allowing the internal circuitry to be shared and reducing circuit cost.
The number of photodetectors may be one or more. When there is a single photodetector, there is a single corresponding conversion circuit, and the detection circuit scans point by point to detect the two-dimensional gray value or depth information, obtaining the two-dimensional gray values or depth information of a plane after multi-point detection. When the photodetectors are arranged in an array, the conversion circuits are likewise arranged in an array with one conversion circuit per photodetector (see FIG. 6, taking a SPAD photodetector as an example, where 601 denotes the photodetector and 602 the conversion circuit), and the detection circuit detects multiple points simultaneously to obtain the two-dimensional gray values or depth information of a plane.
In an embodiment, referring to FIG. 7 (taking a single SPAD photodetector as an example), the conversion circuit includes a first switch K1, a second switch K2, a counter J1, an oscillator Z1 and a decoder Y1. The static contact 0 of the first switch K1 is connected to the input of the counter J1; the first dynamic contact 2D of the first switch K1, the first terminal of the oscillator Z1 and the output of the photodetector SPAD are connected in common; the second dynamic contact 3D of the first switch K1, the second terminal of the oscillator Z1 and the first terminal of the decoder Y1 are connected in common; the second terminal of the decoder Y1 is connected to the static contact 1 of the second switch K2; and the dynamic contact 2 of the second switch K2 and the output of the counter J1 are connected in common to the input of the readout circuit 1023.
In the two-dimensional mode, the first dynamic contact 2D of the first switch K1 is closed, the second dynamic contact 3D of the first switch K1 is open and the second switch K2 is open; the counter J1, driven by the SPAD, records the number of times the SPAD is triggered, thereby acquiring the two-dimensional gray value of a single point. The two-dimensional gray values of other points are then obtained point by point by scanning, and finally the image information of the whole surface is obtained. The scanning methods include mechanical scanning, MEMS scanning and optical phased-array scanning.
In the three-dimensional mode, the first dynamic contact 2D of the first switch K1 is open, the second dynamic contact 3D of the first switch K1 is closed and the second switch K2 is closed; the counter J1, the oscillator Z1 and the decoder Y1 together form a TDC (Time-to-Digital Converter), and the counter J1 and the decoder Y1, driven by the oscillator Z1, record the time of flight of the photons, thereby obtaining the distance information of a single point. The distance information of other points is then obtained point by point by scanning, and finally the depth information of the whole surface is obtained.
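The following is a purely behavioural sketch of the two modes (an assumption about what the counter/oscillator/decoder combination computes, not a register-accurate model of the circuit of FIG. 7; all names are illustrative):

```python
def two_d_mode(trigger_events: list) -> int:
    """2D mode: the counter counts SPAD trigger events; the count is one point's gray value."""
    return len(trigger_events)

def three_d_mode(t_start_s: float, t_stop_s: float, osc_freq_hz: float = 1e9) -> int:
    """3D mode: counter + oscillator + decoder act as a TDC, returning the number of
    oscillator periods between pulse emission and photon arrival (time of flight)."""
    return int(round((t_stop_s - t_start_s) * osc_freq_hz))
```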
In another embodiment, the detection circuit 1022 may also be configured to acquire both kinds of information simultaneously, i.e. to detect the two-dimensional gray value according to the trigger signal while obtaining the distance information according to the first pulsed light and the second pulsed light. The two-dimensional gray value and the depth information can thus be acquired at the same time, enabling the readout circuit 1023 to obtain the second two-dimensional image information and the three-dimensional image information simultaneously.
In this embodiment, the readout circuit 1023 reads out the second two-dimensional image information according to the two-dimensional gray value and the three-dimensional image information according to the depth information. When the detection circuit 1022 has two switchable working modes, the readout circuit 1023 correspondingly reads out the second two-dimensional image information in the two-dimensional mode and the three-dimensional image information in the three-dimensional mode. When the detection circuit 1022 is configured to acquire both kinds of information simultaneously, the readout circuit 1023 correspondingly reads out the second two-dimensional image information and the three-dimensional image information at the same time. The specific components of the readout circuit 1023 are not limited as long as the data readout function can be realized; for example, it may include a combination of resistors, capacitors, amplifiers and samplers.
Thus, through the control circuit 1021, the detection circuit 1022 and the readout circuit 1023, the second acquisition unit 102 can capture the second two-dimensional image in the two-dimensional mode and the three-dimensional image in the three-dimensional mode, or capture the second two-dimensional image and the three-dimensional image simultaneously. When the second acquisition unit 102 operates with mode switching, the acquisition module 100 can switch to the two-dimensional mode when the registration module 200 needs to perform registration, outputting only the first and second two-dimensional images, and switch to the three-dimensional mode when the fusion module 300 needs to perform fusion, outputting only the first two-dimensional image and the three-dimensional image, which simultaneously optimizes power consumption, chip area and cost.
It should be noted that the first acquisition unit 101 and the second acquisition unit 102 of the above embodiments may be integrated in a single multi-sensor camera, for example a camera having both an image sensor and a depth sensor, or may be arranged in two cameras each having its own sensor and auxiliary circuitry.
Referring to FIGS. 8a-8d, FIGS. 8a-8d are schematic structural diagrams of the acquisition module in another embodiment.
Further, on the basis of the above embodiments, in order to realize a spectral filtering function, the acquisition module 100 further includes a filter unit 103 arranged between the scene and the first acquisition unit 101 and/or between the scene and the second acquisition unit 102 (FIG. 8a takes the case of being arranged both between the scene and the first acquisition unit 101 and between the scene and the second acquisition unit 102 as an example), configured to perform spectral filtering on the light reflected by the scene. The wavelength of the image light can thus be selected to realize single-wavelength imaging or to acquire the color information of the scene.
The filter unit 103 may include one or more filters, or may be configured as a color filter array according to the number of image sensors in the first acquisition unit 101 and the number of photodetectors in the second acquisition unit 102. The color filter array is registered with the image sensor array and the photodetector array respectively, so that each color filter covers at least one image sensor and/or one photodetector, and the colors of the filters corresponding to different devices may be the same or different; alternatively, each color filter may be optically coupled to at least two image sensors and/or two photodetectors.
Further, in order to improve the accuracy of the collected image information, the acquisition module 100 further includes a dimming unit 104 arranged between the scene and the first acquisition unit 101 and/or between the scene and the second acquisition unit 102 (FIG. 8b takes the case of being arranged both between the scene and the first acquisition unit 101 and between the scene and the second acquisition unit 102 as an example), configured to adjust the intensity of the light reflected by the scene so that the collected information is more accurate. In an embodiment, the dimming unit 104 includes one or more flash lamps, or a light intensity modulator.
It should be noted that the acquisition module 100 includes one or both of the filter unit 103 and the dimming unit 104. When the acquisition module 100 includes both, the relative positions of the filter unit 103 and the dimming unit 104 are not limited and can be adjusted according to the actual situation.
Taking the first acquisition unit 101 as an example: when the filter unit 103 and the dimming unit 104 are on the same optical axis, the filter unit 103 may, as shown in FIG. 8c, be arranged between the dimming unit 104 and the first acquisition unit 101, so that the light is first adjusted in intensity and then filtered; or, as shown in FIG. 8d, the dimming unit 104 may be arranged between the filter unit 103 and the first acquisition unit 101, so that the light is first filtered and then adjusted in intensity. When the filter unit 103 and the dimming unit 104 need not be on the same optical axis, for example when the dimming unit 104 is a flash lamp, the flash lamp may be placed anywhere between the scene and the first acquisition unit 101, independently of the position of the filter unit 103.
Referring to FIG. 9, FIG. 9 is a schematic structural diagram of the acquisition module in another embodiment.
Further, on the basis of the above embodiments, in order to allow the type of image output to be selected and hence the image processing stage to be chosen, the acquisition module 100 further includes a switch unit 105 connected respectively to the second acquisition unit 102, the registration module 200 and the fusion module 300, configured to control the connection state between the second acquisition unit 102 and the registration module 200 and the connection state between the second acquisition unit 102 and the fusion module 300.
When the apparatus needs to perform registration, the switch unit 105 closes the connection between the second acquisition unit 102 and the registration module 200, and the second acquisition unit 102 outputs the second two-dimensional image to the registration module 200. When the apparatus needs to perform fusion, the switch unit 105 closes the connection between the second acquisition unit 102 and the fusion module 300, and the second acquisition unit 102 outputs the three-dimensional image to the fusion module 300. When the apparatus needs to perform registration and fusion at the same time, the connections between the second acquisition unit 102 and both the registration module 200 and the fusion module 300 are closed simultaneously.
In an embodiment, referring to FIG. 10, the switch unit 105 includes a bidirectional switch K3.
The static contact of the bidirectional switch K3 is connected to the second acquisition unit 102, its first movable contact is connected to the input of the registration module 200, and its second movable contact is connected to the input of the fusion module 300. In the two-dimensional mode, the first movable contact of the bidirectional switch K3 is closed and the second acquisition unit 102 outputs the second two-dimensional image to the registration module 200; in the three-dimensional mode, the second movable contact of the bidirectional switch K3 is closed and the second acquisition unit 102 outputs the three-dimensional image to the fusion module 300.
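A toy model of the routing behaviour just described (the mode strings and module names are illustrative assumptions, not part of the patent):

```python
def route_second_unit_output(mode: str, output):
    """Route the second acquisition unit's output according to the switch position."""
    if mode == "2d":
        return ("registration_module", output)   # second two-dimensional image
    if mode == "3d":
        return ("fusion_module", output)         # three-dimensional image
    raise ValueError(f"unknown mode: {mode}")
```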
Corresponding to the image acquisition apparatus provided in the above embodiments, FIG. 11 shows a flowchart of the image acquisition method provided in this embodiment.
In this embodiment, the image acquisition method includes steps S100, S200, S300 and S400, detailed as follows:
Step S100: acquire a first two-dimensional image, a second two-dimensional image and a three-dimensional image of a scene, the resolution of the first two-dimensional image being higher than that of the second two-dimensional image.
Step S200: perform image registration on the first two-dimensional image and the second two-dimensional image to obtain the mapping relationship between them.
Step S300: fuse the three-dimensional image with the first two-dimensional image according to the mapping relationship to obtain a fused image.
Step S400: up-sample the fused image to obtain a high-resolution three-dimensional depth image.
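Putting the four steps together, a minimal end-to-end sketch of the method of FIG. 11 might look as follows, reusing the illustrative helpers sketched earlier (estimate_mapping, fuse, upsample); all of these names are assumptions rather than part of the patented method:

```python
def acquire_high_resolution_depth(img_2d_hr, img_2d_lr, img_3d):
    # S100: the three images are assumed to have been captured already.
    H = estimate_mapping(img_2d_hr, img_2d_lr)   # S200: register the two 2D images
    fused = fuse(img_3d, img_2d_hr, H)           # S300: fuse the 3D image with the first 2D image
    return upsample(fused)                       # S400: up-sample to the high-resolution depth image
```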
In this embodiment, step S100 is performed by the acquisition module of the above embodiments and obtains images from multiple source channels; for a detailed description of step S100, refer to the description of the acquisition module above. In one embodiment, step S100 includes steps S101 and S102.
Step S101: collect the image light reflected by the scene to form the first two-dimensional image.
Step S102: capture the second two-dimensional image of the scene in the two-dimensional mode and the three-dimensional image of the scene in the three-dimensional mode. Specifically, step S102 controls the light source to emit the first pulsed light toward the scene and receives the second pulsed light reflected back by the scene, detects the two-dimensional gray value or the depth information according to the ambient light, the first pulsed light and the second pulsed light, and then reads out the second two-dimensional image information according to the two-dimensional gray value and the three-dimensional image information according to the depth information.
In another embodiment, before step S102, step S100 further includes steps S103 and S104.
Step S103: perform spectral filtering on the image light reflected by the scene, so that the wavelength of the image light can be selected to realize single-wavelength imaging or to acquire the color information of the scene.
Step S104: adjust the intensity of the image light reflected by the scene, so that the collected information is more accurate.
Step S101 is performed by the first acquisition unit of the above embodiments, step S102 by the second acquisition unit, step S103 by the filter unit and step S104 by the dimming unit, and details are not repeated here.
In this embodiment, step S200 is performed by the registration module of the above embodiments; it registers the differing images and obtains the mapping relationship between them so that subsequent images can be aligned and fused according to this mapping relationship. For a detailed description of step S200, refer to the description of the registration module above. In one embodiment, step S200 includes steps S201 and S202.
Step S201: extract corresponding feature point groups from the first two-dimensional image and the second two-dimensional image respectively.
Step S202: obtain the mapping relationship between the first two-dimensional image and the second two-dimensional image according to the feature point groups.
Step S201 is performed by the extraction unit of the above embodiments and step S202 by the mapping unit, and details are not repeated here.
In this embodiment, step S300 is performed by the fusion module of the above embodiments; it fuses images with different resolutions and different image information into a three-dimensional depth image, raising the resolution of the originally captured images while fusing different kinds of characteristic information and improving the completeness of the image information. For a detailed description of step S300, refer to the description of the fusion module above. In one embodiment, step S300 includes steps S301 and S302.
Step S301: obtain a projection-transformed image according to the three-dimensional image and the mapping relationship.
Step S302: fuse the projection-transformed image with the first two-dimensional image to obtain the fused image.
Step S301 is performed by the transformation unit of the above embodiments and step S302 by the image fusion unit, and details are not repeated here.
In this embodiment, step S400 is performed by the up-sampling module of the above embodiments; it further increases the display resolution of the image and generates a high-resolution three-dimensional depth image. For a detailed description of step S400, refer to the description of the up-sampling module above, which is not repeated here.
It should be noted that, to make it easy to view or evaluate the three-dimensional depth image obtained in the above embodiments, the image acquisition method may further include step S500 of visually displaying the three-dimensional depth image. Step S500 may be performed after step S300 or step S400, or simultaneously with step S300 or step S400.
With the above image acquisition method, a scene is captured to obtain a first two-dimensional image, a second two-dimensional image and a three-dimensional image with different resolutions and different image information; the differing images are registered to obtain the mapping relationship between them; images with different resolutions and different image information are then fused according to the mapping relationship; and the resolution of the fused image is further raised by up-sampling. A high-resolution three-dimensional depth image is finally obtained, and different kinds of characteristic information of the image, such as color information, grayscale information and depth information, are fused, improving the completeness of the image information.
It should be understood that although the steps in the flowchart of FIG. 11 are shown in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 11 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
An embodiment of the present application further provides an electronic device, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the image acquisition method of the above embodiments.
An embodiment of the present application further provides a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to perform the steps of the image acquisition method of any of the above embodiments.
Any reference to a memory, storage, database or other medium used in the present application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features involves no contradiction, it should be regarded as falling within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that a person of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, and these all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention patent shall be subject to the appended claims.
Claims (13)
- An image acquisition apparatus, characterized by comprising: an acquisition module configured to acquire a first two-dimensional image, a second two-dimensional image and a three-dimensional image of a scene, the resolution of the first two-dimensional image being higher than that of the second two-dimensional image; a registration module, connected to the acquisition module, configured to perform image registration on the first two-dimensional image and the second two-dimensional image to obtain a mapping relationship between the first two-dimensional image and the second two-dimensional image; a fusion module, connected to the acquisition module and the registration module, configured to perform image fusion on the three-dimensional image and the first two-dimensional image according to the mapping relationship to obtain a fused image; and an up-sampling module, connected to the fusion module, configured to up-sample the fused image to obtain a high-resolution three-dimensional depth image.
- The image acquisition apparatus according to claim 1, wherein the acquisition module comprises: a first acquisition unit, respectively connected to the registration module and the fusion module, configured to collect image light reflected by the scene to form the first two-dimensional image; and a second acquisition unit, respectively connected to the registration module and the fusion module, configured to acquire the second two-dimensional image of the scene and output it to the registration module, and to acquire the three-dimensional image of the scene and output it to the fusion module.
- The image acquisition apparatus according to claim 2, wherein the second acquisition unit comprises: a control circuit configured to control a light source to emit first pulsed light toward the scene; a detection circuit, connected to the control circuit, configured to receive second pulsed light reflected back from the scene and to detect a two-dimensional gray value and/or depth information according to the first pulsed light and the second pulsed light; and a readout circuit, respectively connected to the detection circuit, the registration module and the fusion module, configured to read out second two-dimensional image information according to the two-dimensional gray value and to read out three-dimensional image information according to the depth information.
- The image acquisition apparatus according to claim 3, wherein the detection circuit comprises: a photodetector, connected to the control circuit, configured to detect the second pulsed light reflected by the scene and generate a trigger signal; and a conversion circuit, connected to the photodetector and the readout circuit, configured to detect the two-dimensional gray value according to the trigger signal in a two-dimensional mode, and to obtain distance information according to the first pulsed light and the second pulsed light in a three-dimensional mode.
- The image acquisition apparatus according to claim 4, wherein the conversion circuit comprises a first switch, a second switch, a counter, an oscillator and a decoder; the static contact of the first switch is connected to the input of the counter; the first dynamic contact of the first switch, a first terminal of the oscillator and the output of the photodetector are connected in common; the second dynamic contact of the first switch, a second terminal of the oscillator and a first terminal of the decoder are connected in common; a second terminal of the decoder is connected to the static contact of the second switch; and the dynamic contact of the second switch and the output of the counter are connected in common to the input of the readout circuit.
- The image acquisition apparatus according to claim 2, wherein the acquisition module further comprises a filter unit, arranged between the scene and the first acquisition unit and/or between the scene and the second acquisition unit, configured to perform spectral filtering on the light reflected by the scene.
- The image acquisition apparatus according to claim 2, wherein the acquisition module further comprises a dimming unit, arranged between the scene and the first acquisition unit and/or between the scene and the second acquisition unit, configured to adjust the intensity of the light reflected by the scene.
- The image acquisition apparatus according to claim 2, wherein the acquisition module further comprises a switch unit, respectively connected to the second acquisition unit, the registration module and the fusion module, configured to control the connection state between the second acquisition unit and the registration module and the connection state between the second acquisition unit and the fusion module.
- The image acquisition apparatus according to any one of claims 1 to 8, wherein the registration module comprises: an extraction unit, connected to the acquisition module, configured to respectively extract corresponding feature point groups of the first two-dimensional image and the second two-dimensional image; and a mapping unit, connected to the extraction unit and the fusion module, configured to obtain the mapping relationship between the first two-dimensional image and the second two-dimensional image according to the feature point groups.
- The image acquisition apparatus according to any one of claims 1 to 8, wherein the fusion module comprises: a transformation unit, connected to the acquisition module and the registration module, configured to obtain a projective transformed image according to the three-dimensional image and the mapping relationship; and an image fusion unit, connected to the transformation unit and the up-sampling module, configured to fuse the projective transformed image and the first two-dimensional image to obtain the fused image.
- An image acquisition method, characterized by comprising: acquiring a first two-dimensional image, a second two-dimensional image and a three-dimensional image of a scene, the resolution of the first two-dimensional image being higher than that of the second two-dimensional image; performing image registration on the first two-dimensional image and the second two-dimensional image to obtain a mapping relationship between the first two-dimensional image and the second two-dimensional image; performing image fusion on the three-dimensional image and the first two-dimensional image according to the mapping relationship to obtain a fused image; and up-sampling the fused image to obtain a high-resolution three-dimensional depth image.
- An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is caused to perform the steps of the image acquisition method according to claim 11.
- A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image acquisition method according to claim 11.
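Claims 3 and 4 describe a detection circuit that yields a gray value from trigger events in the two-dimensional mode and distance information from the pulse pair in the three-dimensional mode. As a hedged illustration of the underlying arithmetic only (the claimed circuit itself is hardware, and these function names are invented for the sketch), the round-trip delay of a light pulse converts to distance as d = c·Δt/2:

```python
# Minimal sketch of the two detection modes implied by claims 3-4; the conversion
# formulas and scaling constants here are assumptions, not the claimed circuit.
C = 299_792_458.0  # speed of light, m/s

def depth_from_pulse(t_emit_s: float, t_return_s: float) -> float:
    """Distance for one pixel from the emit/return timestamps of a light pulse."""
    return C * (t_return_s - t_emit_s) / 2.0

def gray_from_triggers(trigger_count: int, full_scale: int = 255, max_count: int = 1024) -> int:
    """Map the number of photon-trigger events in an exposure to an 8-bit gray value."""
    return min(full_scale, round(full_scale * trigger_count / max_count))

# Example: a pulse returning 20 ns after emission corresponds to roughly 3 m.
print(depth_from_pulse(0.0, 20e-9))   # ~2.998 m
print(gray_from_triggers(512))        # 128
```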
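Claim 5 lists a counter, an oscillator, a decoder and two switches. One common way such parts realise time measurement is a two-stage time-to-digital converter: the counter counts whole oscillator periods (coarse code) and the decoder resolves the residual fraction of a period (fine code). The behavioural sketch below assumes that topology; it does not model the actual switch wiring recited in the claim.

```python
# Behavioural sketch of a two-stage time-to-digital conversion (assumed topology).
def time_to_digital(interval_s: float, osc_period_s: float = 1e-9, phases: int = 8):
    """Return (coarse, fine): whole oscillator cycles plus a decoded sub-cycle phase."""
    coarse = int(interval_s // osc_period_s)              # counter value
    fraction = (interval_s % osc_period_s) / osc_period_s
    fine = int(fraction * phases)                          # decoder output
    return coarse, fine

def digital_to_time(coarse: int, fine: int, osc_period_s: float = 1e-9, phases: int = 8):
    """Reconstruct the measured interval from the two codes (mid-bin estimate)."""
    return coarse * osc_period_s + (fine + 0.5) / phases * osc_period_s

coarse, fine = time_to_digital(13.7e-9)   # 13 full cycles plus 0.7 of a cycle
print(coarse, fine)                        # -> 13 5
print(digital_to_time(coarse, fine))       # -> ~1.37e-08 s
```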
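Claim 9 derives the mapping relationship from corresponding feature point groups. Assuming the mapping is modelled as a planar homography (an assumption, since the claim does not fix the model), four point correspondences already determine it, and the resulting matrix carries any coordinate of the second two-dimensional image into the first. All coordinates below are hypothetical.

```python
# Illustration of claim 9's extraction + mapping units under a homography assumption.
import cv2
import numpy as np

# Corresponding feature point groups (hypothetical, second image -> first image).
pts_second = np.float32([[10, 10], [300, 12], [305, 240], [8, 236]])
pts_first = np.float32([[40, 38], [1210, 45], [1225, 965], [35, 955]])

H = cv2.getPerspectiveTransform(pts_second, pts_first)   # mapping relationship

# Map an arbitrary pixel of the second 2D image into the first 2D image's coordinates.
query = np.float32([[[150.0, 120.0]]])
print(cv2.perspectiveTransform(query, H))
```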
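Claim 10 splits fusion into a projective transformation of the three-dimensional image followed by fusion with the first two-dimensional image. The sketch below, under the same homography assumption, also marks pixels that receive no depth after warping, a practical detail the claim leaves open.

```python
# Sketch of a fusion step in the spirit of claim 10 (assumed representation: the
# fused image is the colour image with the warped depth stacked as an extra channel).
import cv2
import numpy as np

def fuse(first_2d: np.ndarray, depth_3d: np.ndarray, H: np.ndarray) -> np.ndarray:
    h, w = first_2d.shape[:2]
    depth_warped = cv2.warpPerspective(depth_3d.astype(np.float32), H, (w, h),
                                       flags=cv2.INTER_NEAREST)      # projective transform
    valid = cv2.warpPerspective(np.ones_like(depth_3d, np.float32), H, (w, h)) > 0.5
    depth_warped[~valid] = np.nan                                     # mark holes explicitly
    return np.dstack([first_2d.astype(np.float32), depth_warped])     # fused image
```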
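The final step of claim 11 up-samples the fused image into a high-resolution three-dimensional depth image. A joint bilateral refinement of the depth channel guided by the high-resolution colour image is one plausible way to do this; the sketch below assumes that choice (it is not the claimed method) and uses wrap-around borders for brevity.

```python
# Joint-bilateral-style depth upsampling guided by the high-resolution image (assumed).
import cv2
import numpy as np

def joint_bilateral_upsample(depth_lo, guide_hi, sigma_s=4.0, sigma_r=12.0, radius=4):
    """Upsample a low-res depth map to the guide's resolution, weighting neighbours by
    spatial distance and by similarity in the high-res guide image."""
    H, W = guide_hi.shape[:2]
    depth_up = cv2.resize(depth_lo, (W, H), interpolation=cv2.INTER_LINEAR).astype(np.float32)
    guide = (cv2.cvtColor(guide_hi, cv2.COLOR_BGR2GRAY) if guide_hi.ndim == 3 else guide_hi).astype(np.float32)
    out = np.zeros_like(depth_up)
    weight = np.zeros_like(depth_up)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_d = np.roll(depth_up, (dy, dx), axis=(0, 1))
            shifted_g = np.roll(guide, (dy, dx), axis=(0, 1))
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))          # spatial weight
            w_r = np.exp(-((guide - shifted_g) ** 2) / (2 * sigma_r ** 2))   # range weight
            out += w_s * w_r * shifted_d
            weight += w_s * w_r
    return out / np.maximum(weight, 1e-6)
```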
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910713220.1A CN110493587B (en) | 2019-08-02 | 2019-08-02 | Image acquisition apparatus and method, electronic device, and computer-readable storage medium |
CN201910713220.1 | 2019-08-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021022696A1 true WO2021022696A1 (en) | 2021-02-11 |
Family
ID=68549288
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/116308 WO2021022696A1 (en) | 2019-08-02 | 2019-11-07 | Image acquisition apparatus and method, electronic device and computer-readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (2) | CN117156114A (en) |
WO (1) | WO2021022696A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115601274A (en) * | 2021-07-07 | 2023-01-13 | 荣耀终端有限公司(Cn) | Image processing method and device and electronic equipment |
CN115641635A (en) * | 2022-11-08 | 2023-01-24 | 北京万里红科技有限公司 | Method for determining focusing parameters of iris image acquisition module and iris focusing equipment |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110996090B (en) * | 2019-12-23 | 2020-12-22 | 上海晨驭信息科技有限公司 | 2D-3D image mixing and splicing system |
CN114355384B (en) * | 2020-07-07 | 2024-01-02 | 柳州阜民科技有限公司 | Time-of-flight TOF system and electronic device |
US11657529B2 (en) * | 2020-10-12 | 2023-05-23 | Black Sesame Technologies Inc. | Multiple camera system with flash for depth map generation |
TWI849346B (en) * | 2020-10-14 | 2024-07-21 | 鈺立微電子股份有限公司 | Image rendering device and image rendering method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1973779A (en) * | 2005-10-14 | 2007-06-06 | 美国西门子医疗解决公司 | Method and system for cardiac imaging and catheter guidance for radio frequency (RF) ablation |
CN103065322A (en) * | 2013-01-10 | 2013-04-24 | 合肥超安医疗科技有限公司 | Two dimensional (2D) and three dimensional (3D) medical image registration method based on double-X-ray imaging |
CN103403763A (en) * | 2011-03-04 | 2013-11-20 | 皇家飞利浦有限公司 | 2d/3d image registration |
CN106934807A (en) * | 2015-12-31 | 2017-07-07 | 深圳迈瑞生物医疗电子股份有限公司 | A kind of medical image analysis method, system and Medical Devices |
WO2019045724A1 (en) * | 2017-08-31 | 2019-03-07 | Sony Mobile Communications Inc. | Methods, devices and computer program products for generating 3d images |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8134637B2 (en) * | 2004-01-28 | 2012-03-13 | Microsoft Corporation | Method and system to increase X-Y resolution in a depth (Z) camera using red, blue, green (RGB) sensing |
CN102222331B (en) * | 2011-05-16 | 2013-09-25 | 付东山 | Dual-flat panel-based two-dimensional to three-dimensional medical image registering method and system |
CN104021548A (en) * | 2014-05-16 | 2014-09-03 | 中国科学院西安光学精密机械研究所 | Method for acquiring scene 4D information |
CN108875565A (en) * | 2018-05-02 | 2018-11-23 | 淘然视界(杭州)科技有限公司 | The recognition methods of railway column, storage medium, electronic equipment, system |
- 2019-08-02 CN CN202310817262.6A patent/CN117156114A/en active Pending
- 2019-08-02 CN CN201910713220.1A patent/CN110493587B/en active Active
- 2019-11-07 WO PCT/CN2019/116308 patent/WO2021022696A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1973779A (en) * | 2005-10-14 | 2007-06-06 | 美国西门子医疗解决公司 | Method and system for cardiac imaging and catheter guidance for radio frequency (RF) ablation |
CN103403763A (en) * | 2011-03-04 | 2013-11-20 | 皇家飞利浦有限公司 | 2d/3d image registration |
CN103065322A (en) * | 2013-01-10 | 2013-04-24 | 合肥超安医疗科技有限公司 | Two dimensional (2D) and three dimensional (3D) medical image registration method based on double-X-ray imaging |
CN106934807A (en) * | 2015-12-31 | 2017-07-07 | 深圳迈瑞生物医疗电子股份有限公司 | A kind of medical image analysis method, system and Medical Devices |
WO2019045724A1 (en) * | 2017-08-31 | 2019-03-07 | Sony Mobile Communications Inc. | Methods, devices and computer program products for generating 3d images |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115601274A (en) * | 2021-07-07 | 2023-01-13 | 荣耀终端有限公司(Cn) | Image processing method and device and electronic equipment |
CN115641635A (en) * | 2022-11-08 | 2023-01-24 | 北京万里红科技有限公司 | Method for determining focusing parameters of iris image acquisition module and iris focusing equipment |
CN115641635B (en) * | 2022-11-08 | 2023-04-28 | 北京万里红科技有限公司 | Method for determining focusing parameters of iris image acquisition module and iris focusing equipment |
Also Published As
Publication number | Publication date |
---|---|
CN117156114A (en) | 2023-12-01 |
CN110493587A (en) | 2019-11-22 |
CN110493587B (en) | 2023-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021022696A1 (en) | Image acquisition apparatus and method, electronic device and computer-readable storage medium | |
JP5762211B2 (en) | Image processing apparatus, image processing method, and program | |
US8988317B1 (en) | Depth determination for light field images | |
US8830340B2 (en) | System and method for high performance image processing | |
Chane et al. | Integration of 3D and multispectral data for cultural heritage applications: Survey and perspectives | |
CN105959514B (en) | A kind of weak signal target imaging detection device | |
JP2001194114A (en) | Image processing apparatus and method and program providing medium | |
CN112235522B (en) | Imaging method and imaging system | |
US20110157321A1 (en) | Imaging device, 3d modeling data creation method, and computer-readable recording medium storing programs | |
KR20160124669A (en) | Cmos image sensor for 2d imaging and depth measurement with ambient light rejection | |
WO2019184185A1 (en) | Target image acquisition system and method | |
CN108399596B (en) | Depth image engine and depth image calculation method | |
WO2019184184A1 (en) | Target image acquisition system and method | |
JP7024736B2 (en) | Image processing equipment, image processing method, and program | |
CN107749070A (en) | The acquisition methods and acquisition device of depth information, gesture identification equipment | |
WO2006130734A2 (en) | Method and system to increase x-y resolution in a depth (z) camera using red, blue, green (rgb) sensing | |
US10356384B2 (en) | Image processing apparatus, image capturing apparatus, and storage medium for storing image processing program | |
CN110213491B (en) | Focusing method, device and storage medium | |
CN108881717B (en) | Depth imaging method and system | |
CN109559353A (en) | Camera module scaling method, device, electronic equipment and computer readable storage medium | |
EP4128746A1 (en) | Device and method for depth estimation using color images | |
CN108805921A (en) | Image-taking system and method | |
CN111811431A (en) | Three-dimensional scanner, three-dimensional scanning system and method | |
JP4985264B2 (en) | Object identification device | |
CN112750157B (en) | Depth image generation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19940547 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19940547 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27-09-2022) |