
CN113542717A - Camera device with radar function - Google Patents

Camera device with radar function

Info

Publication number
CN113542717A
CN113542717A
Authority
CN
China
Prior art keywords
information
light
radar
laser
rgb
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110683254.8A
Other languages
Chinese (zh)
Inventor
黄钰淇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huang Chuzhen
Original Assignee
Huang Chuzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huang Chuzhen filed Critical Huang Chuzhen
Priority to CN202110683254.8A priority Critical patent/CN113542717A/en
Publication of CN113542717A publication Critical patent/CN113542717A/en
Priority to CN202210663006.1A priority patent/CN115499637B/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • G01S17/10Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/156Mixing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/172Processing image signals image signals comprising non-image signal components, e.g. headers or format information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/172Processing image signals image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/183On-screen display [OSD] information, e.g. subtitles or menus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/257Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Human Computer Interaction (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to the technical field of unmanned-driving environment perception and discloses a camera device with a radar function, comprising a laser light source, a light modulator, a beam expanding trimmer, an imaging lens, a digital photosensitive unit, a coordinator and a 3D image processor. The imaging lens and the beam expanding trimmer share a synchronized field of view: the imaging lens both shoots video and collects the light reflected after the laser irradiates a target object, while the digital photosensitive unit records both video information and reflected-light information. In effect, a radar function is added to an ordinary camera; that is, the camera and the radar are combined into one at the hardware level and kept synchronized by the coordinator, which solves prior-art problems such as the drift over time of the calibration parameters used when fusing camera information with radar information.

Description

Camera device with radar function
Technical Field
The invention relates to the technical field of unmanned environment perception, in particular to a camera device with a radar function.
Background
Future automatic driving applications, including automobiles, involve many automation technologies; among them, perception of the vehicle's environment is one of the most critical, covering both the perception of 3D position information and the perception of color image information. At present, 3D environment perception has split into two routes, laser radar and camera, and the industry has disputed them for years, because the currently mainstream laser radar (e.g. mechanical radar) and camera each have their own strengths. For positioning, a laser radar can estimate the precise position and moving speed of an object in 3D space, but its measurement speed is slow and the equipment is expensive; positioning with a monocular camera is inaccurate, because the result must be converted from 2D to 3D, which makes precise 3D positioning with a camera very difficult. A binocular camera positions more accurately than a monocular one, but at the cost of large computing resources, and it still cannot reach the positioning precision of a laser radar.
Therefore, the prior art has produced technologies that fuse camera video with radar 3D position information of the environment and accurately output RGB_D video information carrying 3D information. The current fusion of camera video and radar 3D position information is a "post-fusion" technology: video information and 3D information (i.e. the radar point cloud) are obtained separately by the camera and the radar, and the two kinds of information are then combined by various computational methods to generate RGB_D information. However, the existing fusion perception technology has the following defect:
Fusing a camera with a radar requires precise parameter calibration between them, covering the internal parameters of each and the external parameters between the camera and the radar. In practice, however, it is difficult to keep this calibration accurate at all times. Even if the camera and the radar are perfectly calibrated in advance, in actual use the calibration parameters drift over time under the influence of factors such as the vehicle's mechanical vibration and heat. Because most post-fusion methods are extremely sensitive to calibration errors, such parameter drift severely weakens the performance and reliability of the fusion.
Disclosure of Invention
The purpose of the invention is to provide a camera device with a radar function, so as to overcome the deficiencies of the fusion perception technology in the above prior art.
In order to achieve the above object, the present invention provides an image pickup apparatus having a radar function, which includes a laser light source, a light modulator, a beam expanding trimmer, an imaging lens, a digital photosensitive unit, a coordinator, and a 3D image processor;
the light modulator is used for modulating light emitted by the laser light source into corresponding modulated light;
the beam expanding trimmer is used for expanding and trimming the modulated light and irradiating it into the field-of-view range;
the imaging lens is used for converging the image of a target object; the imaging lens and the beam expanding trimmer share a synchronized field of view, and the target object is located in the field of view;
the digital photosensitive unit is used for recording laser reflected light information and natural light reflected light information on a target object;
the coordinator is electrically connected with the light modulator, the digital photosensitive unit, the beam expanding trimmer and the imaging lens respectively;
the 3D image processor is used for receiving the video information and the laser reflection light information sent by the digital light sensing unit, calculating the position information of the target object according to the laser reflection light information, and fusing the position information and the video information into 3D color video information.
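Because the camera and the radar share one photosensitive unit, the depth map derived from the laser reflected-light information is already pixel-aligned with the video frame, so the fusion step reduces to attaching a depth channel. A minimal sketch in Python (the function name and NumPy array representation are illustrative assumptions, not part of the patent):

```python
import numpy as np

def fuse_rgbd(rgb_frame: np.ndarray, depth_map: np.ndarray) -> np.ndarray:
    """Attach a per-pixel depth channel to an RGB frame, forming RGB_D.

    Because camera and radar share the same sensor and field of view, the
    depth map is already pixel-aligned with the video frame, so no
    extrinsic calibration or reprojection step is needed.
    """
    assert rgb_frame.shape[:2] == depth_map.shape, "hardware guarantees alignment"
    return np.dstack([rgb_frame, depth_map])  # shape (H, W, 4): R, G, B, D

# Example: a 4x4 frame with a uniform 10 m depth map
rgbd = fuse_rgbd(np.zeros((4, 4, 3)), np.full((4, 4), 10.0))
```

In a post-fusion pipeline this alignment would instead require calibrated intrinsics and extrinsics plus point-cloud reprojection; here it is a direct channel concatenation.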
Furthermore, the camera device further comprises a front filter arranged between the light modulator and the beam expanding trimmer, and a rear filter arranged between the imaging lens and the digital photosensitive unit.
Furthermore, a pixel photosite and a radar photosite are arranged on the digital photosensitive unit.
Furthermore, the time sequence of the video information and the laser reflection light information is in a synchronous or frequency multiplication relationship.
Furthermore, the camera device further comprises a beam splitter, wherein the beam splitter is used for splitting the modulated light modulated by the light modulator into a main emission light beam transmitted to the beam expanding trimmer and a local oscillation light transmitted to the digital photosensitive unit;
The device has a laser holographic working mode. In this mode, the reflected light and the local oscillator light converge on the photosensitive surface of the digital photosensitive unit to form interference fringes, which are superposed on the RGB image information from the imaging lens. The digital photosensitive unit records pure RGB image information and RGB + interference fringe information on alternating frames. The 3D image processor acquires these alternating frames and computes their difference to extract the interference fringe information; it then calculates a holographic image from the fringes, calculates position information from the holographic image, and combines the position information with the RGB video information to form 3D video information RGB_D.
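The frame-interleaved difference step above can be sketched as follows (a hypothetical illustration; array-based frame handling stands in for the device's actual readout circuitry):

```python
import numpy as np

def extract_fringes(pure_rgb: np.ndarray, rgb_plus_fringes: np.ndarray) -> np.ndarray:
    """Recover interference-fringe intensity by differencing alternating frames.

    Even frames hold pure RGB image information; odd frames hold RGB plus
    the superposed interference fringes. Subtracting the pure-RGB frame
    leaves (approximately) only the fringe pattern.
    """
    diff = rgb_plus_fringes.astype(np.int32) - pure_rgb.astype(np.int32)
    return np.clip(diff, 0, None)  # negative residue is treated as noise

# Example: fringes of intensity 40 superposed on a flat image of value 100
pure = np.full((2, 2), 100, dtype=np.uint8)
mixed = np.array([[140, 100], [100, 140]], dtype=np.uint8)
fringes = extract_fringes(pure, mixed)
```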
Furthermore, the camera device has an indirect TOF radar working mode. In this mode, the pulse laser reflected light is periodically and alternately exposed on the digital photosensitive unit (9), which likewise periodically and alternately records pulse laser reflected-light information and video information. The 3D image processor (11) computes the difference between the exposure values of a pixel in the preceding and following frames to indirectly obtain the delay time of the laser pulse, and hence the distance between each pixel point and the corresponding point on the target object; it combines this distance information with the RGB video information to form 3D video information RGB_D.
Furthermore, the camera device has a direct TOF radar working mode, and when the device is in the direct TOF radar working mode, the radar photosites corresponding to the pixels and the corresponding preprocessing circuit thereof are used for processing the pulse laser reflection light information, calculating the delay time of the laser pulse, further obtaining the distance information of each point of the target object, and combining the distance information with the RGB video information into 3D video information RGB _ D.
Furthermore, the image pickup apparatus further includes a beam splitter configured to split the modulated light from the light modulator into probe light transmitted to the beam expanding trimmer and reference light transmitted to a single reference light-sensing spot on the digital photosensitive unit; the single reference light-sensing spot senses the reference light and converts it into a reference electric signal;
the device has an FMCW radar working mode, when the device is in the FMCW radar working mode, the radar photosites and the corresponding preprocessing circuits thereof are used for processing laser reflected light information, the 3D image processor calculates the position information and the speed information of a target object according to reference electric signals and the reflected light information, and combines the position information and the speed information with RGB video information into 3D video information RGB _ D.
Further, the 3D color video information includes formatted additional information having a plurality of fields for recording information related to 3D video recording, the additional information including:
the system comprises satellite positioning information of the position of the camera device, a satellite system name, the moving speed of the camera device, the direction of a main optical axis, the vertical and horizontal angles of a visual field of a video relative to the main optical axis, a focal length, a photosensitive value ISO, an aperture, pulse information, weather information, hardware information of the camera device, a software version number, a manufacturer, an owner and shooting date and time.
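As a sketch, the formatted additional information could be represented as a simple record; every field name and sample value below is an illustrative assumption, not a format defined by the patent:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RgbdSideInfo:
    """Hypothetical container for the formatted additional information
    carried alongside the 3D color video stream."""
    gnss_position: Tuple[float, float, float]  # lat, lon, altitude
    satellite_system: str                      # e.g. "BeiDou", "GPS"
    device_speed_mps: float
    main_optical_axis: Tuple[float, float]     # azimuth, elevation (deg)
    fov_deg: Tuple[float, float]               # vertical, horizontal vs. axis
    focal_length_mm: float
    iso: int
    aperture: float
    pulse_info: str
    weather: str
    hardware_info: str
    software_version: str
    manufacturer: str
    owner: str
    timestamp: str

info = RgbdSideInfo((22.5, 113.9, 10.0), "BeiDou", 16.7, (90.0, 0.0),
                    (30.0, 40.0), 35.0, 400, 2.8, "Tm=2us", "clear",
                    "rev-A", "1.0.0", "N/A", "N/A", "2021-06-21T12:00:00")
```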
Furthermore, the camera device further comprises a controller, which, according to the detected timing of surrounding interfering lasers, replans an optimal interference-avoidance scheme and changes the device's own laser pulse and video exposure timing in real time.
Compared with the prior art, the camera device with the radar function provided by the technical scheme has the beneficial effects that:
1. The imaging lens and the beam expanding trimmer share a synchronized field of view; the imaging lens converges the image of the target object, and the digital photosensitive unit records both video information and laser reflected-light information. This is equivalent to adding a radar function to an ordinary camera, or combining the camera and the radar into one at the hardware level, with the coordinator keeping them synchronized, which solves the prior-art problem of camera-radar calibration parameters drifting over time.
2. The device combines the camera and the radar into one at the hardware level, giving it small volume, low cost, strong functionality and strong adaptability.
3. Because the camera and the radar are combined into one at the hardware level, the 2D video information from the digital photosensitive unit and the radar's reflected-light information can overlap and fuse tightly in both timing and space, so the 3D image processor can output high-precision 3D color video information. This solves the depth-completion and time-synchronization problems that arise when fusing today's high-resolution camera video with low-resolution radar point clouds.
4. The device can be regarded as that the flash radar technology is added on the basis of the common camera, so that the device has the advantages of the flash radar and the advantages of the camera, such as long detection distance, high detection precision, high measurement speed, capability of recording color images and the like.
5. The coordinator unifies the device's timing control and coordinates the work of the camera and the radar, so that the digital photosensitive unit can acquire video information and reflected-light information on alternating frames, time-sharing the unit between imaging and radar sensing.
6. The digital photosensitive unit can receive the radar's reflected-light information one full surface at a time, frame by frame. This is fast, overcomes the latency problem in the detection speed of today's mainstream laser radars, and makes it convenient to fuse with video and output a 3D color video RGB_D.
7. The device can zoom, so when applied to automobile automatic driving it can automatically match and switch between high-speed far vision and low-speed near vision, better meeting the scene requirements of automatic driving.
Drawings
Fig. 1 is a schematic structural diagram of an image pickup apparatus having a radar function according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a digital light-sensing unit according to a first embodiment of the present invention;
fig. 3 is a schematic structural diagram of an image pickup apparatus having a radar function according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of the operation of the digital light sensing unit according to the second embodiment of the present invention;
fig. 5 is a schematic structural diagram of an image pickup apparatus having a radar function according to a third embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be interpreted broadly: for example, fixedly connected, detachably connected, or integrally connected; mechanically or electrically connected; connected directly or indirectly through intervening media, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific case.
Example one
As shown in fig. 1 and fig. 2, the first embodiment of the present invention provides an image pickup apparatus with a radar function, taking operation in the indirect TOF radar working mode as the example. It includes a laser light source 1, an optical modulator 2, a front filter 4, a beam expanding trimmer 5, an imaging lens 7, a rear filter 8, a digital photosensitive unit 9, a coordinator 10, and a 3D image processor 11;
the light source 1 may emit visible light, or invisible light such as ultraviolet or infrared light, and may be coherent or incoherent;
the pulse modulator 2 consists of a digital pulse circuit and an optical modulator; its purpose is to turn the continuous input light into a controllable pulsed light source for radar detection and illumination;
the beam expanding trimmer 5 expands and trims the modulated parallel pulsed light according to the requirements of the video/radar field of view, so that the pulsed light uniformly illuminates exactly the field of view, improving light-source utilization; its principle and role are similar to those of a flash lamp's lens. The beam expanding trimmer 5 may also be synchronized with the imaging lens 7 by the coordinator 10.
The imaging lens 7 is similar to the lens assembly of a camera; its purpose is to image the target object 6 in the field of view onto the photosensitive surface of the digital photosensitive unit 9. The imaging lens 7 may be synchronized with the beam expanding trimmer 5 by the coordinator 10.
The digital photosensitive unit 9 is a CCD or CMOS sensor whose pixels correspond to points on the target object 6. It has two functions: first, video recording; second, indirectly converting the pulse delay of the laser reflected light into an electric signal (an exposure), from which a subsequent calculation circuit computes the distance between a point on the target object and the photosensitive surface;
The 3D image processor 11 is configured to compute, from the electric signal (exposure) corresponding to the collected reflected-light pulse signal (i.e. the reflected-light information), the 3D distance and speed information of each point of the target object 6, which may be fused with the ordinary video information to form 3D color video information RGB_D.
The coordinator 10 is used for coordinating the pulse modulator 2, the digital photosensitive unit 9, the beam expanding trimmer 5 and the imaging lens 7 to enable all the units to work in a coordinated mode;
The receiving principle of the digital photosensitive unit 9 in this device is shown in fig. 2.
As shown in fig. 2, the camera line represents the received video information in units of one frame: the high level represents exposure time, and the low level represents non-exposure time, which is also the time for reading the pixels and clearing the CCD or CMOS. The reflected-pulse line is the laser reflection pulse curve, representing the pulsed light reflected from the target object 6: high means reflected pulse light is present, low means none. The camera and the pulsed light are matched with 4 video frames as one fusion period, and one fusion period corresponds to 2 light pulses, as shown by the dotted range. When the reflected pulse light has no delay, the exposure start time of the 2nd frame and the exposure end time of the 3rd frame within a fusion period correspond respectively to the start time of the 1st light pulse and the end time of the 2nd light pulse in that period. Suppose the laser reflection pulse at a certain pixel is delayed by Δt because of the distance between the corresponding point on the target object 6 and the photosensitive surface (i.e. the laser's flight distance). Since the laser pulse width is much smaller than the exposure time of each camera frame, the exposure of laser pulse 1 within the 2nd video frame's exposure time is unaffected, but the effective exposure time of laser pulse 2 within the 3rd video frame's exposure time is affected. Within one fusion period, the camera transmits only the first 3 frames of data and discards the 4th.
When the device works, the light received by the camera comprises passive natural light and the reflected light of the active laser pulses. Within a fusion period, the natural light serves as photographic illumination during the first frame's exposure time, and is radar background-noise light during the second and third frames' exposure times (it must be filtered out in subsequent noise-reduction calculations).
Since the exposure time of each video frame is short (1/120 second or less), the radar background-noise illumination intensity Ko and the reflected-light pulse intensity Kf can be regarded as constant averages within one fusion period. To ensure that the radar's laser reflection pulse 1 always falls within the 2nd frame's exposure time, the laser pulse width Tm is generally kept smaller than Tz/2, where Tz is the video exposure time per frame:
Exposure of the 1st frame on the CCD or CMOS: M1 = Ko × Tz (1)
Exposure of the 2nd frame: M2 = Ko × Tz + Kf × Tm (2)
Exposure of the 3rd frame: M3 = Ko × Tz + Kf × (Tm - Δt) (3)
From equations (1)-(3), Δt can be found as: Δt = Tm × (M2 - M3)/(M2 - M1)
Because the speed of light C is fixed, the distance to the target object point corresponding to the pixel is: L = C × Δt/2
Because the CCD or CMOS has three RGB channels, the distances of the three channels (Lr, Lg and Lb) can be calculated separately, and then the average distance Lo is taken:
Lo=(Lr+Lg+Lb)/3
from the distances Lo1, Lo2 of two adjacent fusion cycles and the time To of the fusion cycle, the velocity Vo of the target object (point) relative To the camera can be found:
Vo=(Lo2-Lo1)/To
since the first frame of video in the fusion period is not affected by the reflected pulse, it represents the RGB values of the camera original image.
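The derivation above can be checked numerically. This sketch implements equations (1)-(3) and the follow-on distance and velocity formulas; the sample exposure values are invented for illustration:

```python
C = 299_792_458.0  # speed of light, m/s

def pulse_delay(m1: float, m2: float, m3: float, tm: float) -> float:
    """Delta t = Tm * (M2 - M3) / (M2 - M1), from the three frame exposures."""
    return tm * (m2 - m3) / (m2 - m1)

def distance(dt: float) -> float:
    """L = C * dt / 2 (the pulse travels out and back)."""
    return C * dt / 2

def average_distance(lr: float, lg: float, lb: float) -> float:
    """Lo = (Lr + Lg + Lb) / 3 over the three RGB channel estimates."""
    return (lr + lg + lb) / 3

def radial_velocity(lo1: float, lo2: float, to: float) -> float:
    """Vo = (Lo2 - Lo1) / To between two adjacent fusion periods."""
    return (lo2 - lo1) / to

# Invented sample values: M1 = Ko*Tz = 100, M2 = 180, M3 = 140, Tm = 2 us
dt = pulse_delay(100.0, 180.0, 140.0, 2e-6)  # delay of 1 us
L = distance(dt)                              # roughly 150 m
```

Note how the background term Ko × Tz cancels in both differences M2 - M3 and M2 - M1, which is why ambient light does not bias the delay estimate.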
In addition, when the image pickup device with the radar function works in the direct TOF radar mode, radar reception for each pixel point is handled by an independent radar photosite and its corresponding preprocessing circuit, which process that pixel's direct TOF radar information separately. The independent radar photosite and its preprocessing circuit may use an avalanche photodiode (APD), use an integrating circuit to calculate the pulse delay time, read out frame by frame, and use inter-frame differencing to reduce background noise and improve sensitivity. The working principle is the same as that of ordinary TOF laser ranging, with two differences: (1) the total number of radar photosites is larger (equal to the number of pixels) and they are integrated into one chip; (2) the TOF data is read out in frames.
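A toy sketch of the inter-frame differencing used for background-noise reduction on the radar photosites (pure illustration; the real implementation is on-chip mixed-signal circuitry, not Python):

```python
from typing import List

def subtract_background(pulse_frame: List[int], dark_frame: List[int]) -> List[int]:
    """Inter-frame difference: a frame integrated with the laser pulse minus a
    frame integrated without it leaves (roughly) only the pulse signal,
    suppressing the constant ambient background on each radar photosite."""
    return [max(p - d, 0) for p, d in zip(pulse_frame, dark_frame)]

# Ambient background of ~50 counts everywhere; the pulse adds ~30 at two sites
cleaned = subtract_background([80, 50, 81, 50], [50, 49, 50, 51])
```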
Example two
As shown in fig. 3, the second embodiment of the present invention provides another image capturing apparatus with a radar function, more practical than the first embodiment, again taking operation in the indirect TOF radar working mode as the example. It comprises 3 digital photosensitive units 9, namely CCD1, CCD2 and CCD3, and further comprises a laser light source 1, a light modulator 2, a front filter 4, a light-source beam expanding trimmer 5, a main light path imaging lens 7, a reference light path imaging lens 14, a 45-degree optical filter 13, a coordinator 10, a computer 3D image processor 11, a 0-degree optical filter 12 in the main light path, and a 0-degree optical filter 12 in the reference light path;
to facilitate light splitting by the 45-degree optical filter 13, the light source 1 may use red laser light or infrared laser light;
the pulse modulator 2 consists of a digital pulse circuit and an optical modulator and aims to change an input continuous light source into a controllable pulse light source for radar detection illumination;
In this example the light source 1 and the pulse modulator 2 are replaced by a Q-switched laser light source, which can control the width and duty cycle of the laser pulses and change the pulse peak power.
The beam expanding trimmer 5 expands and trims the modulated parallel pulsed light according to the requirements of the video/radar field of view, so that the pulsed light uniformly illuminates exactly the field of view, improving light-source utilization; its principle and role are the same as those of a flash lamp's lens. The beam expanding trimmer 5 may also be synchronized with the imaging lenses 7 and 14 by the coordinator 10.
The main light path imaging lens 7 and the reference light path imaging lens 14 are lens assemblies similar to a camera's; their purpose is to image the target object 6 in the field of view onto the photosensitive surfaces of the digital photosensitive units 9. The imaging lens 7 may be synchronized with the beam expanding trimmer 5 by the coordinator 10.
The pixels of the digital light sensing unit CCD1 and the digital light sensing unit CCD2 correspond to points on the target 6, and the relationship and function are: (1) the number of the pixels and the spatial arrangement of the pixels are the same; (2) the latter only records video; (3) the former can indirectly convert the pulse delay signal of the reflected light into an electric signal (exposure), and the subsequent calculation circuit calculates the distance between the target point and the photosensitive surface;
The digital photosensitive unit CCD3 is mainly used to detect whether any other interfering radar laser is present in the field of view; if so, the computer 3D image processor 11 performs the corresponding calculation, and the positions of the device's own laser pulses and exposure periods on the time axis are shifted accordingly to avoid the interference.
The computer 3D image processor 11 has two functions. First, it computes, from the (exposure) electric signals corresponding to the reflected-light pulse signals collected by the digital photosensitive unit CCD1, the 3D distance and speed information of each point of the target object 6, and fuses them with the ordinary video information from the digital photosensitive unit CCD2 to form 3D color video information RGB_D. Second, it evaluates the detection signal from the digital photosensitive unit CCD3: if other interfering radar lasers are present, it calculates a pulse timing position that does not collide with the other laser sources in the field of view and feeds it back to the coordinator 10.
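The anti-interference scheduling can be sketched as a search for a pulse slot that avoids the detected interfering intervals (a simplified illustration; the interval representation and the step size are assumptions, not the patent's algorithm):

```python
from typing import List, Optional, Tuple

def choose_pulse_offset(busy: List[Tuple[float, float]],
                        period: float, pulse_width: float) -> Optional[float]:
    """Scan candidate offsets within one timing period and return the first
    one whose pulse [t, t + pulse_width) overlaps no detected interfering
    interval; return None if the whole period is occupied."""
    t = 0.0
    while t + pulse_width <= period:
        if all(not (t < end and start < t + pulse_width) for start, end in busy):
            return t
        t += pulse_width  # step by one pulse width
    return None

# Interfering pulses detected at [0.0, 0.2) and [0.5, 0.7) within a 1.0 period
offset = choose_pulse_offset([(0.0, 0.2), (0.5, 0.7)], 1.0, 0.1)
```

The chosen offset would be fed back to the coordinator 10 to shift the device's own laser pulse and exposure timing.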
The coordinator 10 is used for coordinating the pulse modulator 2, the CCD1, the CCD2, the CCD3, the beam expanding trimmer 5, the main light path imaging lens 7 and the reference light path imaging lens 14, so that all units work in coordination;
The receiving principle of the digital photosensitive unit CCD1 in this device is shown in fig. 4.
As shown in fig. 4, the upper solid line is the camera CCD or CMOS exposure curve in units of one frame: the high level represents exposure time, and the low level represents non-exposure time, which is also the time for reading the pixels and clearing the CCD or CMOS. The lower solid line is the reflected pulsed-light curve, representing the pulsed light reflected from the target object 6: high means reflected pulse light is present, low means none. The camera CCD1 and the pulsed light are coordinated with 3 video frames as one fusion period, and one fusion period corresponds to 3 light pulses, as shown by the dotted range. When the reflected pulse light has no delay, the exposure start times of the 2nd and 3rd frames within a fusion period correspond respectively to the end time of the 2nd light pulse and the start time of the 3rd light pulse in that period. Suppose the laser reflection pulse at a certain pixel is delayed by Δt because of the distance between the corresponding point on the target object 6 and the photosensitive surface (i.e. the laser's flight distance). Since the laser pulse width is smaller than the exposure time of each camera frame, the exposure of laser pulse 3 within the 3rd video frame's exposure time is unaffected, but the effective exposure time of laser pulse 2 within the 2nd video frame's exposure time is affected. During one fusion period, the camera CCD1 records 3 frames of data.
When the device operates, the reflected light passing through the main optical path imaging lens 7 comprises both passive natural light and active laser pulse light. After the 45-degree optical filter 13, this light is split into two paths: the laser pulses are transmitted straight through, further filtered by the 0-degree optical filter 12, and reach the digital photosensitive unit CCD1, where the radar pulse information is read; the natural light is reflected by the 45-degree optical filter 13 into the digital photosensitive unit CCD2, where the RGB video information is read. The reflected light from the reference light path imaging lens 14 is filtered by another 0-degree filter 12 to remove natural light and then enters the digital photosensitive unit CCD3, where the interfering laser information is read.
This design ensures that the CCD1 and the CCD2 share the same optical path, the same imaging, the same size and the same pixels (in number and position). They differ as follows: (1) the CCD2 reads the video information and has no special performance requirement beyond operating at a given frame rate, such as 60 fps (the exposure time is then about 1/60 second, roughly 16 ms); (2) the CCD1 must be specially modified from an ordinary CCD so that: first, it can operate at triple the frame rate of the CCD2 (e.g., 180 fps), with an exposure time as short as 1 us to 2 us and an adjustable exposure start time, so as to satisfy the timing requirement of fig. 4; second, each pixel does not need three RGB photosites, one being sufficient; third, the circuitry originally used to express color, with its rich representational capacity (e.g., 24-bit RGB offers 16777216 values), is reused to express distance (16 bits are generally sufficient).
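The reuse of a color channel to carry distance can be illustrated with a simple quantizer. The 16-bit width and the 250 m detection range are taken from the text; the function names and the linear mapping are illustrative assumptions:

```python
def encode_distance(distance_m, max_range_m=250.0, bits=16):
    """Quantize a distance onto an unsigned integer colour-channel code.

    With 16 bits over a 250 m range, one code step is about 3.8 mm.
    """
    levels = (1 << bits) - 1                       # 65535 for 16 bits
    clamped = max(0.0, min(distance_m, max_range_m))
    return round(clamped / max_range_m * levels)

def decode_distance(code, max_range_m=250.0, bits=16):
    """Inverse mapping from channel code back to metres."""
    return code / ((1 << bits) - 1) * max_range_m
```

A round trip through encode and decode loses at most half a quantization step, i.e. about 2 mm at this range.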
The pixel count of the CCD3 can be very low (16 black-and-white pixels are sufficient), but it must be able to operate at a 1 Mfps frame rate.
As shown in fig. 4, the laser pulses are designed with a constant frequency and width (so that the output pulses have the same energy density). Because the video frames of the CCD1 are exposed at very short intervals (about 1/180 second apart), the CCD1 background noise illumination intensity Ko and the reflected light pulse intensity Kf can be treated as constant (averaged) over one fusion period.
To ensure that laser reflection pulse 3 on the CCD1 always falls within the exposure time of frame 3, the laser pulse width Tm is generally kept below half of the per-frame video exposure time Tz: Tm < Tz/2.
The laser pulse width also determines the effective detection range of the radar. For a detection range of 250 m, the pulse width must satisfy Tm > 250 m / (300000000 m/s) ≈ 833 ns.
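Both constraints on the pulse width can be checked together. The sketch below reproduces the text's example numbers (Tz = 2 us exposure, 250 m range), with the speed of light rounded to 3×10^8 m/s:

```python
C = 3.0e8  # speed of light, m/s (rounded as in the text)

def pulse_width_bounds(exposure_time_s, detection_range_m):
    """Return (lower, upper) bounds on the laser pulse width Tm:
    lower bound: Tm > range / C   (flight-time requirement above)
    upper bound: Tm < Tz / 2      (keeps pulse 3 inside frame 3)
    """
    return detection_range_m / C, exposure_time_s / 2

lo, hi = pulse_width_bounds(2e-6, 250.0)
# lo ≈ 833 ns, hi = 1 us
```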
Exposure amount of the 1st frame on the CCD1: M1 = Ko × Tz (1)
Exposure amount of the 2nd frame: M2 = Ko × Tz + Kf × Δt (2)
Exposure amount of the 3rd frame: M3 = Ko × Tz + Kf × Tm (3)
Subtracting (1) from (2) and (3) and taking the ratio eliminates Ko and Kf, giving: Δt = Tm × (M2 - M1)/(M3 - M1)
Because the speed of light C is constant, the distance L to the target object point corresponding to the pixel is:
L=C×Δt/2
From the distances L1 and L2 measured in two adjacent fusion periods and the fusion period duration To, the velocity Vo of the target object (point) relative to the camera is:
Vo=(L2-L1)/To
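Equations (1) to (3) and the two formulas above invert directly. The sketch below assumes ideal, noise-free exposure readings:

```python
C = 3.0e8  # speed of light, m/s

def pulse_delay(m1, m2, m3, tm):
    """Delta-t = Tm * (M2 - M1) / (M3 - M1), from equations (1)-(3)."""
    return tm * (m2 - m1) / (m3 - m1)

def point_distance(dt):
    """L = C * dt / 2 (the round-trip flight time halved)."""
    return C * dt / 2.0

def point_velocity(l1, l2, to):
    """Vo = (L2 - L1) / To across two adjacent fusion periods."""
    return (l2 - l1) / to
```

For example, exposure readings M1 = 2.0, M2 = 3.25, M3 = 7.0 (arbitrary units) with Tm = 1 us give Δt = 0.25 us and L = 37.5 m.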
Since the first video frame of the fusion period is unaffected by the reflected pulse, it represents the noise on the CCD1 caused by "veiling glare", and the calculations above already cancel this noise, thereby improving the sensitivity of the radar.
For each fusion period, the CCD1 outputs one position "frame" that exactly matches the corresponding video frame of the CCD2.
The CCD3 is an ordinary low-pixel monochrome camera whose frame exposure time equals the pulse width Tm; its operating frame rate is high, up to 1 Mfps, but it only needs to detect the presence of interfering pulsed laser light (monochrome).
To facilitate detection by the CCD3, the actual operating frame rate of the CCD1 is increased by three frames [i.e., (180+3) fps]; the three added frames, during which the CCD1 is not exposed, serve as the detection periods of the CCD3. During these periods the local laser switches from pulsed output to a steady mean level (0 or some constant). If no interfering laser is present, each CCD3 frame then receives the same average exposure; if another pulsed laser is present in the field of view, the CCD3 readings deviate from that average, and the computer 3D image processor 11 can determine whether interference with this machine exists and how to avoid it (by offsetting the laser pulses and the exposure positions in time).
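The "average versus not average" test on the CCD3 detection frames can be sketched as a simple spread check. The threshold and function names are illustrative assumptions, not part of the source:

```python
from statistics import mean, pstdev

def foreign_laser_present(detection_exposures, rel_tol=0.05):
    """Flag interference when the CCD3 detection-frame exposures deviate
    from a constant level; the local laser holds a steady mean output
    during these frames, so all readings should match up to noise."""
    m = mean(detection_exposures)
    if m == 0:
        return any(abs(x) > rel_tol for x in detection_exposures)
    return pstdev(detection_exposures) / abs(m) > rel_tol
```

A steady background yields a near-zero relative spread; a foreign pulsed laser hitting some detection frames but not others produces a large one.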
In the CCD1 of this embodiment, with a pulse width of 1 us and an exposure time of 2 us, the "normal" frame period at 183 fps is 1/183 s ≈ 5000 us, so the frame-exposure "duty ratio" of the CCD1 is 2 us : 5000 us ≈ 1 : 2500. In theory, 2500 exposure positions can therefore be selected, and this selection space can be used to avoid the interfering lasers of other radars.
In practice, the usable exposure "duty cycle" of the CCD1, limited by the maximum duty cycle of the laser pulses, tends to fall short of 1 : 2500.
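The duty-ratio arithmetic above is simple integer division; working in microseconds avoids floating-point surprises (the 5000 us frame period is rounded as in the text):

```python
def exposure_positions(frame_period_us, exposure_us):
    """Theoretical number of non-overlapping exposure positions per frame."""
    return frame_period_us // exposure_us

positions = exposure_positions(5000, 2)   # the 1:2500 ratio quoted above
```

Using the unrounded 1/183 s ≈ 5464 us frame period would give slightly more positions.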
The camera device with radar function of this embodiment has the following advantages:
First, three CCDs are combined, and the CCD2 for imaging and the CCD1 for radar share one imaging lens, ensuring an identical field of view;
Second, distance is expressed through ordinary color-expression circuitry, so a theoretical precision of 1/16777216 of the detection range is attainable (for a detection range of 250 meters, the theoretical precision is 0.015 mm);
Third, the CCD3 (with its single lens) detects interfering light independently and the computer performs the avoidance calculation, intelligently preventing interference with the lasers of other (vehicle-mounted) laser devices;
Fourth, the device can be manufactured with existing production technology and is easy to popularize;
Fifth, the manufacturing cost is expected to be modest: a current retail Q-switched laser costs about 5000 yuan and a microsecond-grade 2K high-definition industrial camera about 6000 yuan; with the radar photosensitive chips CCD1 and CCD3 added, the whole set is expected to cost within 15000 yuan, and the wholesale price after mass production is expected to be within 7000 yuan.
EXAMPLE III
As shown in fig. 5, a third embodiment of the present invention provides another camera device with a radar function, this one having a laser holographic radar operating mode. It comprises a laser light source 1, a light modulator 2, a beam splitter 3, a front filter 4, a light source beam expanding trimmer 5, a target object 6, an imaging lens 7, a rear filter 8, a digital photosensitive unit 9, a coordinator 10 and a computer 3D image processor 11. The light from the source is divided by the beam splitter 3 into a main emission beam and local oscillation light; the local oscillation light enters the digital photosensitive unit 9 directly, while the main emission beam is reflected by the target object 6 and returns to the digital photosensitive unit 9, where it forms coherent fringes with the local oscillation light.
The reflected laser from the target object 6, passing through the imaging lens 7, and the local-oscillation reference pulses from the beam splitter 3 converge on the photosensitive surface of the digital photosensitive unit 9 and form interference fringes; these fringes are superposed on the ordinary color image information, and during acquisition the interference fringe information is extracted by computing the difference between alternate frames. The digital photosensitive unit 9 records different images on odd and even frames, for example: odd frames record the ordinary color image (no reflected laser or local oscillation light, only ordinary or natural illumination), while even frames record the ordinary color image plus the holographic image (black-and-white interference fringes), or vice versa.
In the computer 3D image processor 11, the interference fringe information obtained by frame differencing is processed with a suitable algorithm (including but not limited to AI algorithms) to obtain a holographic image, from which the holographic 3D information of the field of view is recovered.
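The frame-differencing step that isolates the fringes can be sketched with NumPy. Real processing would require frame registration, noise suppression and the holographic reconstruction step, all omitted here:

```python
import numpy as np

def extract_fringes(plain_frame, fringe_frame):
    """Subtract the plain colour frame (odd) from the colour-plus-fringe
    frame (even); in the ideal case only the interference fringe pattern
    remains in the difference."""
    return fringe_frame.astype(np.float64) - plain_frame.astype(np.float64)
```

On synthetic frames the difference recovers the superposed fringe pattern exactly; on real sensor data it would only approximate it.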
A device with the laser holographic radar operating mode may also adopt the optical path of fig. 3. With pulsed light, the operating frame rate of the CCD1 is one frame higher than that of the CCD2, and the extra frame is used by the CCD3 to detect the presence of interfering pulsed light and avoid it accordingly (for the principle, refer to the second embodiment). In this case the CCD1 is responsible for holography and the CCD2 for video shooting.
To sum up, the embodiments of the present invention provide a camera device with a radar function that combines a camera and an optical radar at the hardware level, giving the radar the same resolution as the video, time synchronization, and frame-by-frame data readout, so that camera and radar information can be fused seamlessly. This overcomes the defect of the "post-fusion" technique, in which camera and radar information are fused only at the software level. The device acquires 3D stereoscopic information at the same time as the 2D color video and outputs 3D color video RGB_D for application scenarios such as the automatic driving of aircraft and automobiles, with the advantages of high detection speed, low equipment cost, small volume and highly reliable detection quality. The device combines the digital photosensitive unit 9, the field-variable beam expanding trimmer 5, the imaging lens 7, the coordinator 10 for unified command and coordination, and the other parts, so that the camera with radar function is governed by a single time-control system and the camera and radar functions operate with unified timing, a unified field of view and unified frame readout. The radar can also operate with laser light sources of different frequency bands and output 3D color video RGB_D.
To facilitate later AI training with the recorded 3D color video RGB_D, formatted additional information is appended to the RGB_D information, recording the environmental parameters of the video recording.
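The formatted additional information can be sketched as a plain record. Every field name below is a hypothetical rendering of the parameters listed in claim 9; the source defines no concrete format:

```python
from dataclasses import dataclass, asdict

@dataclass
class RgbDSidecar:
    """Hypothetical per-recording metadata attached to an RGB_D stream."""
    gnss_position: tuple          # satellite positioning of the camera
    gnss_system: str              # satellite system name, e.g. "BeiDou"
    camera_speed_mps: float       # moving speed of the camera device
    main_axis_heading_deg: float  # direction of the main optical axis
    focal_length_mm: float
    iso: int                      # photosensitive value
    aperture_f: float
    weather: str
    firmware_version: str
    manufacturer: str
    recorded_at: str              # shooting date and time, ISO-8601
```

Serializing with asdict() yields a dictionary suitable for embedding alongside the video stream.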
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and substitutions can be made without departing from the technical principle of the present invention, and these modifications and substitutions should also be regarded as the protection scope of the present invention.

Claims (10)

1. A camera device with a radar function, characterized by comprising a laser light source (1), a light modulator (2), a beam expanding trimmer (5), an imaging lens (7), a digital photosensitive unit (9), a coordinator (10) and a 3D image processor (11);
the light modulator (2) is used for modulating the light emitted by the laser light source (1) into corresponding modulated light;
the beam expanding trimmer (5) is used for expanding and trimming the modulated light and irradiating the modulated light into a field range;
the imaging lens (7) is used for converging an image of a target object (6), the imaging lens (7) and the beam expanding trimmer (5) are in synchronous field of view, and the target object (6) is located in the field of view;
the digital photosensitive unit (9) is used for recording laser reflected light information and natural light reflected light information on the target object (6);
the coordinator (10) is respectively and electrically connected with the light modulator (2), the digital photosensitive unit (9), the beam expanding trimmer (5) and the imaging lens (7);
the 3D image processor (11) is used for receiving the video information and the laser reflection light information sent by the digital photosensitive unit (9), calculating the position information of the target object (6) according to the laser reflection light information, and fusing the position information and the video information into 3D color video information.
2. The camera device with a radar function according to claim 1, further comprising a front filter (4) provided between the light modulator (2) and the beam expanding trimmer (5), and a rear filter (8) provided between the imaging lens (7) and the digital photosensitive unit (9).
3. The camera device with a radar function according to claim 1, wherein a pixel photosite and a radar photosite are provided on the digital photosensitive unit (9).
4. The imaging apparatus having a radar function according to claim 1, wherein timing of the video information and the laser reflected light information is in a synchronous or frequency-doubled relationship.
5. The image pickup apparatus having a radar function according to claim 1, further comprising a beam splitter (3), wherein the beam splitter (3) is configured to split the modulated light modulated by the light modulator (2) into a main emission beam transmitted to the beam expander finisher (5) and a local oscillation beam transmitted to the digital photosensitive unit (9);
the device has a laser holographic working mode, when the device works in the laser holographic working mode, reflected light and local oscillator light are converged on a photosensitive surface of a digital photosensitive unit (9) to form interference fringes, and the interference fringes are superposed with RGB image information from an imaging lens (7); the digital photosensitive unit (9) records pure RGB image information and RGB + interference fringe information at intervals of frames; the 3D image processor (11) acquires image information and RGB + interference fringe information of the digital photosensitive unit (9) at intervals of frames, and performs difference calculation on the image information and the RGB + interference fringe information to extract interference fringe information; the 3D image processor (11) calculates a holographic image through interference fringe information, calculates position information according to the holographic image information, and combines the position information with RGB video information into 3D video information RGB _ D.
6. The imaging device with radar function according to claim 1, wherein the device has an indirect TOF radar operation mode, when the device is in the indirect TOF radar operation mode, the pulse laser reflected light is periodically and alternately exposed on the digital light sensing unit (9), the digital light sensing unit (9) also periodically and alternately records pulse laser reflected light information and video information, the 3D image processor (11) calculates the delay time of the laser pulse indirectly according to the difference between the exposure amounts of the previous and subsequent frames of the pixel, and further obtains the distance and position information between each pixel point and the corresponding target object, and combines the distance and position information with the RGB video information to form 3D video information RGB _ D.
7. The imaging apparatus with radar function according to claim 3, wherein the apparatus has a direct TOF radar operation mode, and when the apparatus is in the direct TOF radar operation mode, the radar photosites corresponding to the respective pixels and the corresponding preprocessing circuits thereof are configured to process the pulse laser reflection light information, calculate the delay time of the laser pulse, and further obtain the distance information of the respective points of the target object, and combine the distance information with the RGB video information to form 3D video information RGB _ D.
8. The camera device with a radar function according to claim 3, further comprising a beam splitter (3), wherein the beam splitter (3) is configured to split the modulated light modulated by the light modulator (2) into probe light transmitted to the beam expanding trimmer (5) and reference light transmitted to an individual reference light sensing spot on the digital photosensitive unit (9); the individual reference light sensing spot is used for sensing the reference light and converting it into a reference electric signal;
the device is provided with an FMCW radar working mode, when the device is in the FMCW radar working mode, the radar photosites and corresponding preprocessing circuits are used for processing laser reflected light information, and the 3D image processor (11) calculates position information and speed information of a target object (6) according to reference electric signals and the reflected light information and combines the position information and the speed information with RGB video information to form 3D video information RGB _ D.
9. The camera device with radar function as claimed in claim 1, wherein the 3D color video information includes formatted additional information having a plurality of fields for recording information related to 3D video recording, the additional information including:
the system comprises satellite positioning information of the position of the camera device, a satellite system name, the moving speed of the camera device, the direction of a main optical axis, the vertical and horizontal angles of a visual field of a video relative to the main optical axis, a focal length, a photosensitive value ISO, an aperture, pulse information, weather information, hardware information of the camera device, a software version number, a manufacturer, an owner and shooting date and time.
10. The camera apparatus with radar function according to claim 1, further comprising a controller for replanning an optimal solution for avoiding the interference based on the timing of the detected ambient interference laser light, and changing the timing of the laser pulse and the video exposure thereof in real time.
CN202110683254.8A 2021-06-18 2021-06-18 Camera device with radar function Withdrawn CN113542717A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110683254.8A CN113542717A (en) 2021-06-18 2021-06-18 Camera device with radar function
CN202210663006.1A CN115499637B (en) 2021-06-18 2022-06-13 Camera device with radar function

Publications (1)

Publication Number Publication Date
CN113542717A true CN113542717A (en) 2021-10-22

Family

ID=78125251

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110683254.8A Withdrawn CN113542717A (en) 2021-06-18 2021-06-18 Camera device with radar function
CN202210663006.1A Active CN115499637B (en) 2021-06-18 2022-06-13 Camera device with radar function



Also Published As

Publication number Publication date
CN115499637B (en) 2024-02-27
CN115499637A (en) 2022-12-20


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211022