Disclosure of Invention
In order to solve the above technical problem, it is an object of the present invention to provide a multi-view vision adaptive exposure synchronization method, including:
triggering a plurality of monocular vision cameras to carry out a first exposure, and acquiring the high level signals generated by the monocular vision cameras, wherein the holding time of each high level signal is the exposure time of a second exposure;
and triggering the plurality of monocular vision cameras to carry out the second exposure according to the midpoint time of the exposure time of the second exposure, so that the midpoint time of the exposure time of the second exposure coincides with the synchronization time of the synchronization signal generated by the FPGA.
By adopting the technical scheme, the synchronization time is the midpoint time of the holding time of the high-level signal in the synchronization signal.
By adopting the technical scheme, before the plurality of monocular vision cameras are triggered to carry out the first exposure, the plurality of monocular vision cameras are configured, and the plurality of monocular vision cameras are set to be in an external trigger mode.
By adopting the technical scheme, the monocular vision cameras are arranged in different directions or in the same direction.
By adopting the technical scheme, after the high level signals generated by the plurality of monocular vision cameras are acquired, the image time stamps are recorded, wherein the exposure start time of each image time stamp is the rising edge time of the corresponding high level signal, and the exposure end time is the falling edge time of that high level signal.
By adopting the technical scheme, when the monocular vision cameras are triggered to carry out the first exposure, the image data is obtained after the valid frame data is received.
By adopting the technical scheme, the image data collected by the monocular vision cameras are cached in the FPGA, and the image data are sent to the host through the network interface.
By adopting the technical scheme, the network interface of the FPGA is a GMII interface, and data transmission is completed over gigabit Ethernet using the UDP protocol.
By adopting the technical scheme, exposure compensation is carried out on the second exposure by using the exposure time of the second exposure: one half of the exposure time of the second exposure is used as the exposure compensation parameter for synchronization of the second exposure, and the plurality of monocular vision cameras are triggered to carry out the second exposure according to the midpoint moment of the exposure time.
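The compensation described above reduces to a single subtraction. As a minimal sketch (the function name and nanosecond units are illustrative assumptions, not part of the invention), the trigger output time is the synchronization time minus one half of the measured exposure time:

```python
def trigger_output_time(sync_time_ns: int, exposure_time_ns: int) -> int:
    """Compute when to assert the camera trigger so that the midpoint of
    the second exposure coincides with the synchronization time.

    sync_time_ns: synchronization time, i.e. the midpoint of the holding
        time of the high level signal in the FPGA synchronization signal.
    exposure_time_ns: holding time of the high level signal measured
        during the first exposure (= exposure time of the second exposure).
    """
    # One half of the exposure time is the exposure compensation parameter:
    # firing the trigger this much earlier centers the exposure on sync_time_ns.
    return sync_time_ns - exposure_time_ns // 2
```

For example, with a synchronization time of 1 000 000 ns and a measured exposure of 20 000 ns, the trigger fires at 990 000 ns, so the exposure spans 990 000 ns to 1 010 000 ns and its midpoint lands exactly on the synchronization time.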
It is another object of the present invention to provide a multi-view vision adaptive exposure synchronization system, the system comprising:
the acquisition module is used for acquiring high level signals generated by the monocular vision cameras when the monocular vision cameras are triggered to perform first exposure, and the high level signal holding time is the exposure time of the second exposure;
and the exposure synchronization module is used for triggering the monocular vision cameras to carry out the second exposure, so that the midpoint time of the exposure time of the second exposure is consistent with the synchronization time of the synchronization signal generated by the FPGA.
Compared with the prior art: according to the invention, the high level signal is obtained when the vision cameras are exposed for the first time, the holding time of the high level signal is the exposure time of the second exposure, and the plurality of monocular vision cameras are triggered to carry out the second exposure so that the midpoint time of the exposure time of the second exposure is consistent with the synchronization time of the synchronization signal generated by the FPGA. Exposure synchronization of the monocular vision cameras can thus be realized as the position and the scene change; therefore, the monocular vision cameras are not limited by position and scene, the exposure synchronization precision of the multi-view vision is obviously improved, and the method is easy to popularize.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The embodiments described below with reference to the drawings are exemplary and intended to illustrate the invention, and are not to be construed as limiting the invention.
In order to more clearly illustrate the multi-view vision adaptive exposure synchronization method and system disclosed by the embodiments of the invention, one preferred embodiment of the present invention describes a binocular vision adaptive exposure synchronization method and system in detail; the binocular vision adaptive exposure synchronization method and system can be extended to synchronization applications of multi-view vision.
The invention discloses a binocular vision self-adaptive exposure synchronization method and system, which can realize self-adaptive exposure synchronization of a binocular vision camera according to the change of positions and scenes, so that the midpoints of the exposure times of the binocular vision cameras coincide, and the synchronization precision of the binocular vision can be obviously improved. Details are given below.
In order to better understand the binocular vision adaptive exposure synchronization method and system disclosed in the embodiments of the present invention, a structure of an image capturing device to which the embodiments of the present invention are applicable is described below.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an image capturing device according to an embodiment of the present invention. In the configuration shown in fig. 1, the image capturing apparatus includes binocular vision cameras (i.e., two monocular vision cameras 11), an FPGA12, and a host 13; the FPGA12 is connected to the binocular vision cameras through a DVP interface, and the FPGA12 is connected to the host 13 through Ethernet. Here, FPGA12 refers to a field programmable gate array. The binocular vision camera (i.e., the two monocular vision cameras 11) consists of two CMOS sensor modules independently controlled by the FPGA12, whose external trigger signals are provided by the FPGA12. The CMOS sensor module is preferably a monochrome global shutter sensor; in this mode, all pixels are exposed at the same time, which effectively avoids the rolling shutter ("jelly") effect.
In the image acquisition device shown in fig. 1, when the device is started, the connection between the host 13 and the FPGA12 is established. After the host 13 determines, according to the actual application requirement, that the image receiving condition is met, it sends a system start instruction to the FPGA12; the device is then initialized, and the register of the CMOS sensor module is configured and set to the external trigger mode. The FPGA12 generates a trigger signal to trigger the CMOS sensor module to acquire image data, and the acquired binocular real-time image data is transmitted to the host 13. The embodiments of the present invention are not limited in this regard.
Based on the image acquisition equipment shown in fig. 1, the embodiment of the invention discloses a binocular vision self-adaptive exposure synchronization method. Referring to fig. 2 and fig. 3, fig. 2 is a schematic flowchart of a binocular vision adaptive exposure synchronization method according to an embodiment of the present invention. The method described in fig. 2 and fig. 3 may be applied to the image acquisition equipment shown in fig. 1. The binocular vision self-adaptive exposure synchronization method may comprise the following steps:
S201, triggering the two monocular vision cameras 11 to perform the first exposure, and acquiring the high level signals generated by the monocular vision cameras 11, wherein the high level signal holding time is the exposure time of the second exposure.
Illustratively, the binocular vision cameras (i.e., the two monocular vision cameras 11) are located in different directions, for example, on two opposite sides of the FPGA12; the binocular vision cameras may also face different directions in other arrangements, and the binocular vision cameras in the embodiment of the present invention may even face the same direction, which is not limited by the present invention.
Illustratively, the FPGA12 generates a trigger signal, the two monocular vision cameras 11 perform a first exposure after receiving the trigger signal from the FPGA12, the two monocular vision cameras 11 generate a high level signal, and the high level holding time is the exposure time of the second exposure. Specifically, after the FPGA12 acquires the high level signals generated by the two monocular vision cameras 11, the image time stamp is recorded, the exposure start time of the image time stamp is the high level signal rising edge time, and the exposure end time of the image time stamp is the high level signal falling edge time. For example, the image timestamp accuracy is 10 ns.
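The timestamp bookkeeping above can be sketched as follows (a simplified software model; the 10 ns tick constant and the field names are illustrative assumptions, the real record is kept in FPGA registers):

```python
TICK_NS = 10  # assumed image timestamp resolution of 10 ns

def record_timestamp(rising_edge_ns: int, falling_edge_ns: int) -> dict:
    """Record an image timestamp from the edges of the high level signal.

    The exposure start time is the rising edge time, the exposure end
    time is the falling edge time, and the holding time between them is
    the exposure time used to compensate the next exposure.
    """
    start = (rising_edge_ns // TICK_NS) * TICK_NS  # quantize to 10 ns ticks
    end = (falling_edge_ns // TICK_NS) * TICK_NS
    return {
        "exposure_start_ns": start,
        "exposure_end_ns": end,
        "exposure_time_ns": end - start,            # high level holding time
        "midpoint_ns": start + (end - start) // 2,  # used for synchronization
    }
```

The midpoint field is what the synchronization step compares against the synchronization time of the FPGA synchronization signal.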
Illustratively, when the FPGA12 triggers the first exposure, image data is obtained after valid frame data is received; the image data is cached in two memories of the FPGA12 and sent to the host 13 through a network interface. The network interface of the FPGA12 is a GMII interface, data transmission is completed over gigabit Ethernet using the UDP protocol, and the actual data transmission rate is 96 MB/s.
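The transport can be sketched in software as plain UDP datagrams. This is a hedged illustration only: the 5-byte header, the chunk size, and the function name are assumptions for the sketch; the actual design transmits through the FPGA's GMII interface, not a host socket.

```python
import socket

CHUNK = 1400  # payload per datagram, kept under a typical Ethernet MTU

def send_frame(sock: socket.socket, addr, frame: bytes, cam_id: int) -> int:
    """Send one cached image frame to the host over UDP, chunk by chunk.

    A small header (1-byte camera id + 4-byte sequence number) lets the
    receiver reassemble the frame, since UDP guarantees neither ordering
    nor delivery. Returns the total number of bytes sent.
    """
    sent = 0
    for seq, off in enumerate(range(0, len(frame), CHUNK)):
        header = cam_id.to_bytes(1, "big") + seq.to_bytes(4, "big")
        sent += sock.sendto(header + frame[off:off + CHUNK], addr)
    return sent
```

The quoted 96 MB/s sits below the 125 MB/s theoretical limit of gigabit Ethernet, which is consistent with UDP/IP and Ethernet framing overhead.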
Illustratively, the first exposure in an embodiment of the present invention is considered an invalid synchronization, as shown by the first exposure in fig. 3. In fig. 3, the midpoints of the exposure times of the high level signals generated by the two monocular vision cameras CAM0 and CAM1 in the first exposure are not aligned, that is, the exposures of CAM0 and CAM1 are not synchronized.
S202, triggering the monocular vision cameras to carry out the second exposure according to the midpoint time of the exposure time of the second exposure, so that the midpoint time of the exposure time of the second exposure is consistent with the synchronization time of the synchronization signal generated by the FPGA.
Optionally, the synchronization time is a midpoint time of a holding time of a high-level signal in the synchronization signal.
Illustratively, exposure compensation is performed on the second exposure by using the exposure time of the second exposure, and the output time of the trigger signal triggering the second exposure is obtained. Specifically, one half of the exposure time of the second exposure is used as the exposure compensation parameter for synchronization of the second exposure, and the monocular vision cameras 11 are triggered to perform the second exposure accordingly, so that the midpoint time of the exposure time of the second exposure is consistent with the synchronization time of the synchronization signal. By using the exposure time of the high level signal generated by the last exposure in the compensation of the next exposure, the adaptive exposure synchronization of the monocular vision cameras 11 is realized, and the midpoint of the exposure time of each of the two monocular vision cameras 11 is aligned with the synchronization time of the synchronization signal, that is, the exposure midpoints are aligned, as shown in fig. 4.
Illustratively, the synchronization signal is generated by the FPGA12 at a synchronization frequency, and the synchronization frequency corresponds to the synchronization time. The synchronization period is the exposure interval time set by the system, i.e., the interval between two adjacent exposures, and the synchronization time is the midpoint time of the holding time of the high level signal in the synchronization signal. When the synchronization time is consistent with the midpoint time of the exposure time of the high level signal generated by the second exposure, the system achieves exposure synchronization, as shown in fig. 4.
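Putting the pieces together, the per-camera loop can be simulated as below. This is a toy model under the stated assumption that each camera's exposure time is stable between consecutive frames, which is what makes the previous measurement a valid compensation parameter; the names are illustrative.

```python
def exposure_midpoints(sync_times, exposure_times):
    """Simulate adaptive exposure synchronization for one camera.

    sync_times: synchronization times, one per exposure period.
    exposure_times: exposure time measured from the high level signal of
        the previous exposure, used to compensate the current trigger.
    Returns the midpoint of each compensated exposure.
    """
    midpoints = []
    for sync, exp in zip(sync_times, exposure_times):
        trigger = sync - exp // 2             # compensated trigger output time
        midpoints.append(trigger + exp // 2)  # midpoint of the exposure
    return midpoints

# Two cameras with very different exposure times still share midpoints:
cam0 = exposure_midpoints([1_000_000, 2_000_000], [20_000, 20_000])
cam1 = exposure_midpoints([1_000_000, 2_000_000], [50_000, 50_000])
```

Both cameras' exposure midpoints land on the same synchronization times even though their exposure durations differ, which is exactly the alignment the second exposure achieves in fig. 4.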
It can be seen that, in the method described in fig. 2, a high level signal is obtained when the vision cameras are exposed for the first time, the high level signal holding time is the exposure time of the second exposure, and the two monocular vision cameras 11 are triggered to perform the second exposure so that the midpoint time of the exposure time of the second exposure is consistent with the synchronization time of the synchronization signal generated by the FPGA. Exposure synchronization of the binocular vision cameras can thus be realized as the position and the scene change, so the binocular vision cameras are not limited by position and scene, the exposure synchronization accuracy of the binocular vision is significantly improved, and the method is easy to popularize.
Based on the image acquisition equipment shown in fig. 1, the embodiment of the invention discloses a binocular vision self-adaptive exposure synchronization system. Referring to fig. 5, fig. 5 is a schematic diagram illustrating a binocular vision adaptive exposure synchronization system according to an embodiment of the present invention. The binocular vision adaptive exposure synchronization system described in fig. 5 may be applied to an image capturing device, and may include the following modules:
the acquiring module 21 is configured to acquire high level signals generated by the monocular vision cameras when the monocular vision cameras are triggered to perform the first exposure, where a high level signal holding time is an exposure time of the second exposure.
And the exposure synchronization module 22 is configured to trigger the two monocular vision cameras 11 to perform the second exposure at the midpoint time of the exposure time of the second exposure, so that the midpoint time of the exposure time of the second exposure is consistent with the synchronization time of the synchronization signal generated by the FPGA.
Optionally, the synchronization time is a midpoint time of a holding time of a high-level signal in the synchronization signal.
According to the invention, the high level signal is obtained when the vision cameras are exposed for the first time, the holding time of the high level signal is the exposure time of the second exposure, and the plurality of monocular vision cameras 11 are triggered to carry out the second exposure so that the midpoint time of the exposure time of the second exposure is consistent with the synchronization time of the synchronization signal generated by the FPGA. Exposure synchronization of the binocular vision cameras can thus be realized as the position and the scene change; therefore, the binocular vision cameras are not limited by position and scene, the exposure synchronization precision of the binocular vision is obviously improved, and the method is easy to popularize.
As a possible implementation, the acquiring module 21 includes a recording unit 211; after the acquiring module 21 acquires the high level signals generated by the monocular vision cameras, the recording unit 211 is configured to record the image timestamps, where the exposure start time of each image timestamp is the rising edge time of the corresponding high level signal, and the exposure end time is the falling edge time of that high level signal. The details are as described above and are not repeated herein.
As a possible implementation, the acquisition module 21 includes an image data acquisition module 212, and the image data acquisition module 212 is used for acquiring image data when the two monocular vision cameras 11 are exposed for the first time.
As a possible implementation, the exposure synchronization module 22 includes an adaptive exposure compensation unit 221; the adaptive exposure compensation unit 221 is configured to perform exposure compensation on the second exposure using the exposure time of the acquired high level signal, and to obtain the output time of the trigger signal for triggering the second exposure. Further, the adaptive exposure compensation unit 221 is configured to use one half of the exposure time of the acquired high level signal as the exposure compensation parameter for the second exposure synchronization and to trigger the plurality of monocular vision cameras to perform the second exposure accordingly, so that the midpoint time of the exposure time of the second exposure coincides with the synchronization time of the synchronization signal. The details are as described above and are not repeated herein.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.