
CN112929568A - Multi-view vision self-adaptive exposure synchronization method and system - Google Patents


Info

Publication number
CN112929568A
CN112929568A
Authority
CN
China
Prior art keywords
exposure
time
synchronization
vision cameras
monocular vision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110136811.4A
Other languages
Chinese (zh)
Inventor
张永瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Jiuwu Interchange Intelligent Technology Co ltd
Original Assignee
Suzhou Jiuwu Interchange Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Jiuwu Interchange Intelligent Technology Co ltd
Priority to CN202110136811.4A
Publication of CN112929568A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/665 Control of cameras or camera modules involving internal camera communication with the image sensor, e.g. synchronising or multiplexing SSIS control signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a multi-view vision adaptive exposure synchronization method and system. The method comprises: when the monocular vision cameras perform a first exposure, acquiring the high-level signals generated by the monocular vision cameras, wherein the holding time of each high-level signal is the exposure time of the second exposure; and triggering the monocular vision cameras to perform the second exposure at the midpoint of the exposure time of the second exposure, so that the midpoint of the exposure time of the second exposure coincides with the synchronization time of the synchronization signal. The invention achieves exposure synchronization of a multi-view vision camera that adapts to changes in position and scene, so that the midpoints of the exposure times of the multi-view cameras coincide; the synchronization precision of multi-view vision is significantly improved, and the method is easy to popularize.

Description

Multi-view vision self-adaptive exposure synchronization method and system
Technical Field
The invention relates to the technical field of machine vision, in particular to a multi-view vision self-adaptive exposure synchronization method and system.
Background
Multi-view vision is an important research topic in the field of machine vision and is widely applied in many fields: in visual navigation, for example unmanned-aerial-vehicle visual-inertial navigation and AGV visual navigation; in industrial measurement, for target detection and tracking; and in recognition, commonly for target recognition, target tracking, scene recognition and the like. A camera can work in an external-trigger mode, completing an exposure when a trigger signal is received; however, because the target brightness seen by the individual cameras of a multi-view rig differs, their exposure times often differ during shooting, causing differences in the exposure synchronization of the multi-view system.
Most existing multi-view vision cameras face the same direction, so the amounts of incoming light are similar and the influence of exposure time on synchronization precision is small; however, when the cameras face different directions and scene brightness differs markedly, the exposure synchronization precision of multi-view vision is poor.
Disclosure of Invention
In order to solve the above technical problem, an object of the present invention is to provide a multi-view vision adaptive exposure synchronization method, comprising:
triggering the monocular vision cameras to perform a first exposure, and acquiring the high-level signals generated by the monocular vision cameras, wherein the holding time of each high-level signal is the exposure time of the second exposure;
and triggering the plurality of monocular vision cameras to perform the second exposure at the midpoint of the exposure time of the second exposure, so that the midpoint of the exposure time of the second exposure coincides with the synchronization time of the synchronization signal generated by the FPGA.
By adopting the technical scheme, the synchronization time is the midpoint time of the holding time of the high-level signal in the synchronization signal.
By adopting the technical scheme, before the plurality of monocular vision cameras are triggered to carry out the first exposure, the plurality of monocular vision cameras are configured, and the plurality of monocular vision cameras are set to be in an external trigger mode.
By adopting the technical scheme, the monocular vision cameras are arranged in different directions or in the same direction.
By adopting the technical scheme, after high level signals generated by a plurality of monocular vision cameras are acquired, the image time stamps are recorded, the exposure starting time of the image time stamps is the rising edge time of the high level signals, and the exposure finishing time of the image time stamps is the falling edge time of the high level signals.
By adopting the technical scheme, when the monocular vision cameras are triggered to carry out the first exposure, the image data is obtained after the effective frame data is received.
By adopting the technical scheme, the image data collected by the monocular vision cameras are cached in the FPGA, and the image data are sent to the host through the network interface.
By adopting the technical scheme, the network interface of the FPGA is a GMII interface, and the data transmission is completed by adopting a gigabit Ethernet of a UDP protocol.
By adopting the technical scheme, exposure compensation is carried out on the second exposure by using the exposure time of the second exposure, one half time of the exposure time of the second exposure is used as an exposure compensation parameter for synchronization of the second exposure, and the monocular vision cameras are triggered to carry out the second exposure at the midpoint moment of the exposure time.
It is another object of the present invention to provide a multi-view vision adaptive exposure synchronization system, the system comprising:
the acquisition module is used for acquiring high level signals generated by the monocular vision cameras when the monocular vision cameras are triggered to perform first exposure, and the high level signal holding time is the exposure time of the second exposure;
and the exposure synchronization module is used for triggering the monocular vision cameras to perform the second exposure at the midpoint of the exposure time of the second exposure, so that the midpoint of the exposure time of the second exposure coincides with the synchronization time of the synchronization signal generated by the FPGA.
Compared with the prior art: the invention acquires the high-level signal when the vision cameras perform the first exposure, the holding time of the high-level signal being the exposure time of the second exposure, and triggers the plurality of monocular vision cameras to perform the second exposure at the midpoint of the exposure time of the second exposure, so that this midpoint coincides with the synchronization time of the synchronization signal generated by the FPGA. Exposure synchronization of the monocular vision cameras can thus be achieved as position and scene change; the cameras are not limited by position or scene, the exposure synchronization precision of multi-view vision is significantly improved, and the method is easy to popularize.
Drawings
Fig. 1 is a schematic structural diagram of an image capturing device according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of a binocular vision adaptive exposure synchronization method disclosed in the embodiment of the present invention.
Fig. 3 is another schematic diagram of a binocular vision adaptive exposure synchronization method according to an embodiment of the present invention.
Fig. 4 is a schematic view of synchronization of exposure times of binocular vision cameras in the binocular vision adaptive exposure synchronization method disclosed in the embodiment of the present invention.
Fig. 5 is a schematic structural diagram of a binocular vision adaptive exposure synchronization system disclosed in the embodiment of the present invention.
The reference numbers in the figures illustrate: 11. a monocular vision camera; 12. an FPGA; 13. a host; 21. an acquisition module; 211. a recording unit; 212. an exposure calculation unit; 22. an exposure synchronization module; 221. an adaptive exposure compensation unit.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In order to more clearly illustrate the multi-view vision adaptive exposure synchronization method and system disclosed by the embodiments of the invention, one preferred embodiment of the present invention describes a binocular vision adaptive exposure synchronization method and system in detail; the binocular vision adaptive exposure synchronization method and system extend directly to synchronization applications of multi-view vision.
The invention discloses a binocular vision self-adaptive exposure synchronization method and system, which can realize self-adaptive exposure synchronization of a binocular vision camera according to the change of positions and scenes, so that the middle points of the exposure time of the binocular vision camera are consistent, and the synchronization precision of the binocular vision can be obviously improved. The following are detailed below.
In order to better understand the binocular vision adaptive exposure synchronization method and system disclosed in the embodiments of the present invention, a structure of an image capturing device to which the embodiments of the present invention are applicable is described below.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an image capturing device according to an embodiment of the present invention. In the configuration shown in fig. 1, the image capturing apparatus includes binocular vision cameras (i.e., two monocular vision cameras 11), an FPGA12, and a host 13; the FPGA12 is connected to the binocular vision cameras through a DVP interface and to the host 13 through Ethernet. FPGA12 refers to a field-programmable gate array. The binocular vision cameras (i.e., the two monocular vision cameras 11) are two CMOS sensor modules independently controlled by the FPGA12, whose external trigger signals are provided by the FPGA12. Each CMOS sensor module is preferably a monochrome global-shutter sensor; in this mode all pixels are exposed at the same time, which effectively avoids the rolling-shutter (jello) effect.
In the image acquisition device shown in fig. 1, when the device is started, a connection between the host 13 and the FPGA12 is established. Once the host 13 meets the image-receiving conditions of the actual application, it sends a system start instruction to the FPGA12; the device is initialized, the registers of the CMOS sensor modules are configured and set to external-trigger mode, and the FPGA12 generates a trigger signal that triggers the CMOS sensor modules to acquire image data, the acquired binocular real-time image data being transmitted to the host 13. The embodiments of the present invention are not limited thereto.
Based on the image acquisition equipment shown in fig. 1, the embodiment of the invention discloses a binocular vision self-adaptive exposure synchronization method. Referring to fig. 2 and fig. 3, fig. 2 is a schematic flowchart of a binocular vision adaptive exposure synchronization method according to an embodiment of the present invention. The methods described in fig. 2 and 3 may be applied to the image acquisition apparatus shown in fig. 1. The binocular vision self-adaptive exposure synchronization method can comprise the following steps of:
s201, triggering the two monocular vision cameras 11 to perform first exposure, and acquiring high level signals generated by the monocular vision cameras 11, wherein the high level signal holding time is the exposure time of the second exposure.
Illustratively, the binocular vision cameras (i.e., the two monocular vision cameras 11) are located in different directions, for example, the binocular vision cameras are located in two opposite directions of the FPGA12, although the binocular vision cameras may also be in different directions in other forms, and even the binocular vision cameras in the embodiment of the present invention may also be in the same direction, which is not limited by the present invention.
Illustratively, the FPGA12 generates a trigger signal, the two monocular vision cameras 11 perform a first exposure after receiving the trigger signal from the FPGA12, the two monocular vision cameras 11 generate a high level signal, and the high level holding time is the exposure time of the second exposure. Specifically, after the FPGA12 acquires the high level signals generated by the two monocular vision cameras 11, the image time stamp is recorded, the exposure start time of the image time stamp is the high level signal rising edge time, and the exposure end time of the image time stamp is the high level signal falling edge time. For example, the image timestamp accuracy is 10 ns.
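The edge-timestamping step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name and the 100 MHz tick counter are assumptions chosen to match the stated 10 ns timestamp accuracy.

```python
# Sketch (assumption): deriving a frame's exposure time and exposure midpoint
# from the rising/falling-edge timestamps of the camera's high-level signal.
# Timestamps are integer ticks of a 100 MHz counter, i.e. 10 ns resolution,
# matching the image-timestamp accuracy stated in the text.

TICK_NS = 10  # one timestamp tick = 10 ns

def exposure_from_edges(rising_tick: int, falling_tick: int) -> tuple[int, float]:
    """Return (exposure_time_ns, midpoint_tick) for one frame.

    rising_tick  -- timestamp of the rising edge (exposure start)
    falling_tick -- timestamp of the falling edge (exposure end)
    """
    if falling_tick <= rising_tick:
        raise ValueError("falling edge must come after rising edge")
    exposure_ns = (falling_tick - rising_tick) * TICK_NS
    midpoint_tick = (rising_tick + falling_tick) / 2
    return exposure_ns, midpoint_tick

# Example: a 5 ms exposure starting at tick 1_000_000.
exp_ns, mid = exposure_from_edges(1_000_000, 1_500_000)
# exp_ns == 5_000_000 ns (5 ms), mid == 1_250_000.0 ticks
```

This measured exposure time is what the next trigger's compensation uses, as described in step S202.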
Illustratively, when the FPGA12 triggers the first exposure, image data is obtained after receiving valid frame data, the image data is respectively cached in two memories of the FPGA12 and sent to the host 13 through a network interface, the network interface of the FPGA12 is a GMII interface, data transmission is completed by using a gigabit ethernet in a UDP protocol, and an actual data transmission rate is 96 MB/s.
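As a rough sanity check of the 96 MB/s figure (my own back-of-envelope estimate, not from the patent), the theoretical payload ceiling of gigabit Ethernet with maximum-size UDP datagrams can be computed from the standard frame overheads:

```python
# Assumption: standard 1500-byte MTU frames; per-frame overhead is the
# 1518-byte Ethernet frame (headers + FCS) plus 8 bytes preamble/SFD and
# a 12-byte inter-frame gap on the wire.

LINE_RATE_BPS = 1_000_000_000   # gigabit Ethernet line rate
PAYLOAD = 1472                  # max UDP payload per 1500-byte-MTU frame
WIRE = 1518 + 8 + 12            # bytes actually occupied on the wire per frame

max_payload_MBps = LINE_RATE_BPS / 8 * PAYLOAD / WIRE / 1e6
# ~119.6 MB/s theoretical ceiling; the stated 96 MB/s effective rate sits
# comfortably below it, leaving headroom for control traffic and buffering.
assert 96 < max_payload_MBps < 125
```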
Illustratively, the first exposure of an embodiment of the present invention is considered to be an invalid synchronization, as detailed in the first exposure shown in FIG. 3. In fig. 3, the midpoints of the exposure times of the high-level signals generated by the two monocular vision cameras CAM0 and CAM1 in the first exposure are not aligned, that is, the exposures of CAM0 and CAM1 are not synchronized.
S202, triggering the monocular vision cameras to perform the second exposure at the midpoint of the exposure time of the second exposure, so that the midpoint of the exposure time of the second exposure coincides with the synchronization time of the synchronization signal generated by the FPGA.
Optionally, the synchronization time is a midpoint time of a holding time of a high-level signal in the synchronization signal.
Illustratively, exposure compensation is performed on the second exposure by using the exposure time of the second exposure, and the output time of the trigger signal triggering the second exposure is obtained. Specifically, one half of the exposure time of the second exposure is used as the exposure compensation parameter for synchronization of the second exposure, and the monocular vision cameras 11 are triggered to perform the second exposure at the midpoint time of the exposure time, so that the midpoint time of the exposure time of the second exposure is consistent with the synchronization time of the synchronization signal. By using the exposure time of the high level signal generated by the last exposure to participate in the next exposure compensation, the adaptive exposure synchronization of the monocular vision cameras 11 is realized, and the alignment of the middle time of the exposure time of the two monocular vision cameras 11 with the synchronization time of the synchronization signal, that is, the exposure midpoint, is ensured, as shown in fig. 4 in detail.
Illustratively, the synchronization signal is a synchronization signal of a synchronization frequency generated by the FPGA12, the synchronization frequency of the synchronization signal corresponds to a synchronization time, the synchronization time is an exposure interval time set by the system, the exposure interval time is an interval time of two adjacent exposures, the synchronization time is a midpoint time of a holding time of a high level signal in the synchronization signal, and when the synchronization time is consistent with the midpoint time of the exposure time of the high level signal generated by the second exposure, the system achieves exposure synchronization, which is shown in fig. 4 in detail.
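The compensation in S202 can be sketched as follows. This is an assumption about how the scheduling arithmetic works, not the patent's FPGA logic: each camera's trigger is advanced by half of its previously measured exposure time so that the exposure midpoint lands on the shared synchronization time.

```python
# Sketch (assumption): schedule the next trigger per camera so that the
# midpoint of its exposure coincides with the synchronization time. The
# compensation parameter is half the exposure time measured on the previous
# frame, per step S202. Times are in FPGA timestamp ticks.

def next_trigger_ticks(sync_tick: int, prev_exposure_ticks: dict[str, int]) -> dict[str, int]:
    """Map camera id -> tick at which the trigger should be asserted.

    sync_tick           -- synchronization time (midpoint of the sync signal's high level)
    prev_exposure_ticks -- per-camera exposure duration measured on the last frame
    """
    return {cam: sync_tick - exp // 2 for cam, exp in prev_exposure_ticks.items()}

# CAM0 last exposed for 400 ticks, CAM1 for 1000 ticks (different scene brightness).
triggers = next_trigger_ticks(sync_tick=10_000,
                              prev_exposure_ticks={"CAM0": 400, "CAM1": 1000})
# CAM0 fires at tick 9_800, CAM1 at 9_500; both exposures are centered on 10_000.
```

With unequal exposure times the trigger instants differ per camera, but the exposure midpoints coincide, which is exactly the alignment shown in fig. 4.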
It can be seen that, in the method described in fig. 2, a high-level signal is obtained when the vision cameras perform the first exposure, the holding time of the high-level signal being the exposure time of the second exposure, and the plurality of monocular vision cameras 11 are triggered to perform the second exposure at the midpoint of the exposure time of the second exposure, so that this midpoint coincides with the synchronization time of the synchronization signal generated by the FPGA. Exposure synchronization of the binocular vision cameras can therefore be achieved as position and scene change; the cameras are not limited by position or scene, the exposure synchronization precision of binocular vision is significantly improved, and the method is easy to popularize.
Based on the image acquisition equipment shown in fig. 1, the embodiment of the invention discloses a binocular vision self-adaptive exposure synchronization system. Referring to fig. 5, fig. 5 is a schematic diagram illustrating a binocular vision adaptive exposure synchronization system according to an embodiment of the present invention. The binocular vision adaptive exposure synchronization system described in fig. 5 may be applied to an image capturing device, and may include the following modules:
the acquiring module 21 is configured to acquire high level signals generated by the monocular vision cameras when the monocular vision cameras are triggered to perform the first exposure, where a high level signal holding time is an exposure time of the second exposure.
And the exposure synchronization module 22 is configured to trigger the two monocular vision cameras 11 to perform the second exposure at the midpoint time of the exposure time of the second exposure, so that the midpoint time of the exposure time of the second exposure is consistent with the synchronization time of the synchronization signal generated by the FPGA.
Optionally, the synchronization time is a midpoint time of a holding time of a high-level signal in the synchronization signal.
According to the invention, the high-level signal is obtained when the vision cameras perform the first exposure, the holding time of the high-level signal being the exposure time of the second exposure, and the plurality of monocular vision cameras 11 are triggered to perform the second exposure at the midpoint of the exposure time of the second exposure, so that this midpoint coincides with the synchronization time of the synchronization signal generated by the FPGA. Exposure synchronization of the binocular vision cameras can therefore be achieved as position and scene change; the cameras are not limited by position or scene, the exposure synchronization precision of binocular vision is significantly improved, and the method is easy to popularize.
As a possible implementation, the acquiring module 21 includes a recording unit 211. After the acquiring module 21 acquires the high-level signals generated by the monocular vision cameras, the recording unit 211 records the image timestamps, where the exposure start time of an image timestamp is the rising-edge time of the high-level signal and the exposure end time is the falling-edge time of the high-level signal. The details are as above and are not repeated here.
As a possible implementation, the acquisition module 21 includes an image data acquisition module 212, and the image data acquisition module 212 is used for acquiring image data when the two monocular vision cameras 11 are exposed for the first time.
As a possible implementation, the exposure synchronization module 22 includes an adaptive exposure compensation unit 221, and the adaptive exposure compensation unit 221 is configured to perform exposure compensation on the second exposure using the exposure time of the acquired high-level signal, and obtain an output time of a trigger signal for triggering the second exposure. Further, the adaptive exposure compensation unit 221 is configured to trigger the plurality of monocular vision cameras to perform the second exposure at a midpoint time of the exposure time using a half time of the exposure time of the acquired high-level signal as the exposure compensation parameter of the second exposure synchronization, so that the midpoint time of the exposure time of the second exposure coincides with the synchronization time of the synchronization signal. The details are as above, and the present invention is not described herein.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.

Claims (10)

1. A multi-view vision adaptive exposure synchronization method, the method comprising:
triggering the monocular vision cameras to carry out first exposure, and acquiring high level signals generated by the monocular vision cameras, wherein the high level signal holding time is the exposure time of the second exposure;
and triggering the plurality of monocular vision cameras to carry out the second exposure at the midpoint time of the exposure time of the second exposure, so that the midpoint time of the exposure time of the second exposure is consistent with the synchronization time of the synchronization signal generated by the FPGA.
2. The multi-view vision adaptive exposure synchronization method of claim 1, wherein: the synchronization time is the midpoint time of the holding time of the high-level signal in the synchronization signal.
3. The multi-view vision adaptive exposure synchronization method of claim 1, wherein: before triggering the monocular vision cameras to carry out first exposure, configuring the monocular vision cameras and setting the monocular vision cameras into an external triggering mode.
4. The multi-view vision adaptive exposure synchronization method of claim 1, wherein: the plurality of monocular vision cameras are arranged in different directions or in the same direction.
5. The multi-view vision adaptive exposure synchronization method of claim 1, wherein: and after high-level signals generated by a plurality of monocular vision cameras are acquired, recording image time stamps of the monocular vision cameras, wherein the exposure starting time of the image time stamps is the rising edge time of the high-level signals, and the exposure ending time of the image time stamps is the falling edge time of the high-level signals.
6. The multi-view vision adaptive exposure synchronization method of claim 1, wherein: and when the monocular vision cameras are triggered to perform the first exposure, acquiring image data after receiving effective frame data.
7. The multi-view vision adaptive exposure synchronization method of claim 6, wherein: image data collected by the monocular vision cameras are cached in the FPGA, and the image data are sent to the host through the network interface.
8. The multi-view vision adaptive exposure synchronization method of claim 7, wherein: the network interface of the FPGA is a GMII interface, and data transmission is completed over gigabit Ethernet using the UDP protocol.
9. The multi-view vision adaptive exposure synchronization method of claim 1, wherein: exposure compensation is performed on the second exposure by using the exposure time of the second exposure, one half of the exposure time of the second exposure is used as the exposure compensation parameter for synchronization of the second exposure, and the monocular vision cameras are triggered to perform the second exposure at the midpoint of the exposure time.
10. A multi-view vision adaptive exposure synchronization system, the system comprising:
the acquisition module is used for acquiring high level signals generated by the monocular vision cameras when the monocular vision cameras are triggered to perform first exposure, and the high level signal holding time is the exposure time of the second exposure;
and the exposure synchronization module is used for triggering the monocular vision cameras to carry out the second exposure when the midpoint time of the exposure time of the second exposure is the same as the synchronization time of the synchronization signal generated by the FPGA.
CN202110136811.4A 2021-02-01 2021-02-01 Multi-view vision self-adaptive exposure synchronization method and system Pending CN112929568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110136811.4A CN112929568A (en) 2021-02-01 2021-02-01 Multi-view vision self-adaptive exposure synchronization method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110136811.4A CN112929568A (en) 2021-02-01 2021-02-01 Multi-view vision self-adaptive exposure synchronization method and system

Publications (1)

Publication Number Publication Date
CN112929568A true CN112929568A (en) 2021-06-08

Family

ID=76169257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110136811.4A Pending CN112929568A (en) 2021-02-01 2021-02-01 Multi-view vision self-adaptive exposure synchronization method and system

Country Status (1)

Country Link
CN (1) CN112929568A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180115683A1 (en) * 2016-10-21 2018-04-26 Flux Planet, Inc. Multiview camera synchronization system and method
CN108781259A (en) * 2017-07-31 2018-11-09 深圳市大疆创新科技有限公司 A kind of control method of image taking, control device and control system
CN110319815A (en) * 2019-05-17 2019-10-11 中国航空工业集团公司洛阳电光设备研究所 A kind of polyphaser synchronization exposure system and method based on annular connection structure
CN111955001A (en) * 2018-04-09 2020-11-17 脸谱科技有限责任公司 System and method for synchronizing image sensors



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 215000 floor 6, building 5, building 3, Tianyun Plaza, No. 111, Wusongjiang Avenue, Guoxiang street, Wuzhong District, Suzhou City, Jiangsu Province
Applicant after: Suzhou Jiuwu interworking Intelligent Technology Co.,Ltd.
Address before: 1 / F, building B1, Dongfang Chuangzhi garden, 18 JinFang Road, Suzhou Industrial Park, 215000, Jiangsu Province
Applicant before: Suzhou Jiuwu Interchange Intelligent Technology Co.,Ltd.

CB02 Change of applicant information

Address after: 215000 floor 6, building 5, building 3, Tianyun Plaza, No. 111, Wusongjiang Avenue, Guoxiang street, Wuzhong District, Suzhou City, Jiangsu Province
Applicant after: Suzhou Jiuwu Intelligent Technology Co.,Ltd.
Address before: 215000 floor 6, building 5, building 3, Tianyun Plaza, No. 111, Wusongjiang Avenue, Guoxiang street, Wuzhong District, Suzhou City, Jiangsu Province
Applicant before: Suzhou Jiuwu interworking Intelligent Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210608