
CN113327291B - Calibration method for 3D modeling of remote target object based on continuous shooting - Google Patents

Calibration method for 3D modeling of remote target object based on continuous shooting

Info

Publication number
CN113327291B
CN113327291B (application CN202110636162.4A)
Authority
CN
China
Prior art keywords
image acquisition
acquisition device
target object
calibration
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110636162.4A
Other languages
Chinese (zh)
Other versions
CN113327291A (en)
Inventor
左忠斌
左达宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianmu Aishi Beijing Technology Co Ltd
Original Assignee
Tianmu Aishi Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianmu Aishi Beijing Technology Co Ltd
Priority to CN202110636162.4A
Publication of CN113327291A
Application granted
Publication of CN113327291B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a calibration method for 3D modeling of a remote target object based on continuous shooting, which comprises the following steps: (1) selecting a calibration object A located a certain distance away from the target object B; (2) shooting a plurality of images of the calibration object A with the acquisition device; (3) moving and/or rotating the whole acquisition device while shooting continuously until the target object B enters the field of view of the acquisition device; (4) acquiring a plurality of images of the target object B with the acquisition device. The calibration object carries a plurality of calibration points, and the coordinates of the target object are calibrated according to the coordinates of these calibration points. The method realizes absolute-size calibration of a remote target object by continuous shooting during movement or rotation.

Description

Calibration method for 3D modeling of remote target object based on continuous shooting
Technical Field
The invention relates to the technical field of morphology measurement, in particular to the technical field of 3D morphology measurement.
Background
Currently, when 3D acquisition and measurement are performed visually, a camera is usually rotated relative to the target object, or a plurality of cameras are arranged around the target object to acquire images simultaneously. For example, the Digital Emily project of the University of Southern California uses a spherical rig on which hundreds of cameras are fixed at different positions and angles to achieve 3D acquisition and modeling of the human body. In either case, however, the camera needs to be relatively close to the target object, at least within an arrangeable range, so that images of the target object can be captured from different positions.
However, in some applications it is not possible to acquire images around the object. For example, when a surveillance camera monitors an area, it is difficult to place cameras around a target object or to rotate a camera around it, because the area is large, the distance is long, and the acquisition target is not fixed. How to perform 3D acquisition and modeling of a target object in this situation is a problem to be solved.
A further problem is how to obtain the exact dimensions of such distant objects: even if 3D modeling is completed, the model lacks absolute scale. For example, when modeling a distant building, the prior art typically places a marker of known size on or beside the building and derives the size of the building's 3D model from the size of the marker. However, it is not always possible to place a calibration object near the target; in that case, even an available 3D model has no absolute size and the true dimensions of the target cannot be known. For a house on the opposite bank of a river, a marker would have to be placed on the house to calibrate its model, which is difficult if the river cannot be crossed. Besides distance, there are also cases where the target is not far away but a calibration object cannot be placed on it for other reasons; for example, when acquiring a human body, a calibration object cannot be placed on the body, and obtaining the absolute size of the human body model becomes a serious problem.
In addition, it has been proposed in the prior art to define the camera position using an empirical formula involving the rotation angle, the target size, and the object distance, thereby taking both synthesis speed and synthesis effect into account. In practical applications, however, it was found that unless an accurate angle-measuring device is available, the user is insensitive to angles and it is difficult to determine them accurately; the size of the target is also difficult to determine accurately, for example in the scenario of building a 3D model of the house across the river. Measurement errors then lead to errors in the camera positions, which in turn affect acquisition and synthesis speed and results; accuracy and speed still need to be improved.
Therefore, the following technical problems urgently need to be solved: (1) 3D information of distant and unspecific targets must be acquirable; (2) synthesis speed and synthesis precision must both be taken into account; (3) the three-dimensional absolute size of a distant object, or of an object on which a calibration object cannot be placed, must be obtainable accurately and conveniently.
Disclosure of Invention
In view of the above, the present invention has been made to provide a calibration method that overcomes or at least partially solves the above-mentioned problems.
The invention provides a calibration method for 3D modeling of a remote target object based on continuous shooting, which comprises the following steps:
(1) Selecting a calibration object A which is arranged at a distance away from the target object B;
(2) Shooting a plurality of images of the calibration object A by using acquisition equipment;
(3) Moving and/or rotating the whole acquisition equipment, and continuously shooting until the target object B enters the field of view of the acquisition equipment;
(4) The acquisition equipment acquires a plurality of images of the target object B;
the calibration object is provided with a plurality of calibration points;
calibrating coordinates of the target object according to the coordinates of the plurality of calibration points, including: extracting feature points of all the photographed pictures, and matching the feature points to obtain model coordinate values of the object A and the object B; and calibrating the absolute coordinates of the target object according to the absolute coordinates of the calibration point and the model coordinates.
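By way of illustration of the last step above (transferring absolute scale from the calibration points on A to the jointly reconstructed model of B), the following is a minimal sketch of a least-squares similarity estimate (scale, rotation, translation) over corresponding points. It is not the patent's own solver; the function and variable names are illustrative, and it assumes at least three non-collinear calibration points whose absolute coordinates are known.

```python
import numpy as np

def estimate_similarity(model_pts, abs_pts):
    """Estimate s, R, t such that abs ≈ s * R @ model + t (Umeyama-style least squares).
    model_pts, abs_pts: (N, 3) arrays of corresponding calibration points, N >= 3."""
    mu_m, mu_a = model_pts.mean(axis=0), abs_pts.mean(axis=0)
    Xm, Xa = model_pts - mu_m, abs_pts - mu_a
    H = Xm.T @ Xa / len(model_pts)              # cross-covariance (model -> absolute)
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:           # keep a proper rotation (det = +1)
        D[2, 2] = -1.0
    R = Vt.T @ D @ U.T
    var_m = (Xm ** 2).sum() / len(model_pts)    # mean squared spread of the model points
    s = np.trace(np.diag(S) @ D) / var_m
    t = mu_a - s * R @ mu_m
    return s, R, t

def to_absolute(s, R, t, pts):
    """Map model-coordinate points (e.g. the reconstructed cloud of B) to absolute coordinates."""
    return s * (R @ pts.T).T + t
```

With the scale recovered this way, the true dimensions of target object B can be read directly off the transformed point cloud.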
Optionally, during the moving or rotating process of the collecting device, the following conditions are satisfied: the intersection of the three images acquired correspondingly by the adjacent three acquisition positions is not empty.
Optionally, the acquisition device is a 3D intelligent vision device, and includes an image acquisition device and a rotation device;
the rotating device is used for driving the acquisition area of the image acquisition device to generate relative motion with the target;
and the image acquisition device is used for acquiring a group of images of the target object through the relative motion.
Optionally, the position of the image acquisition device when rotating to acquire a group of images meets the following conditions:
Wherein L is the linear distance between the optical centers of the two adjacent acquisition position image acquisition devices; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element of the image acquisition device; m is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; μ is an empirical coefficient.
Optionally, when the acquisition device is a 3D intelligent image acquisition device, two adjacent acquisition positions of the 3D intelligent image acquisition device meet the following conditions:
wherein L is the linear distance between the optical centers of the two adjacent acquisition position image acquisition devices; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element of the image acquisition device; t is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; delta is the adjustment coefficient.
Optionally, feature points are extracted from the acquired images and matched to obtain sparse feature points; the matched feature point coordinates are input, the sparse three-dimensional point cloud and the position and attitude data of the photographing image acquisition device are solved, and model coordinate values of the sparse three-dimensional point clouds and the positions of the object A and the object B are obtained.
Optionally, the absolute coordinates X_T, Y_T, Z_T of the calibration points on the calibration object and a picture template of the marked points are imported, and the template is matched against all input pictures to obtain the pixel row and column numbers x_i, y_i of the marked points in the input pictures.
Optionally, the method further comprises: according to the position and attitude data of the photographing camera, inputting the pixel row and column numbers x_i, y_i of the calibration points, from which the coordinates (X_i, Y_i, Z_i) can be calculated;
according to the absolute coordinates (X_T, Y_T, Z_T) of the calibration points and the model coordinates (X_i, Y_i, Z_i), the 7 spatial coordinate conversion parameters between the model coordinates and the absolute coordinates are calculated using a spatial similarity transformation formula.
Optionally, the method further comprises the step of converting coordinates of the three-dimensional point clouds of the object A and the object B and the position and posture data of the photographing camera into an absolute coordinate system by using the solved 7 parameters, so that the real size of the target object is obtained.
Alternatively, the absolute size of the target is obtained.
The invention also provides a 3D model construction method that uses the above calibration method.
Inventive aspects and technical effects
1. The absolute size calibration of the remote target object is realized by a continuous shooting method in the moving or rotating process.
2. By optimizing the position of the camera for collecting the pictures, the synthesis speed and the synthesis precision can be improved simultaneously. When the camera acquisition position is optimized, the angle is not required to be measured, the size of the target is not required to be measured, and the applicability is stronger.
3. The camera optical axis is set at a certain included angle to the turntable rather than parallel to it, and images of the target object are collected while rotating; 3D synthesis and modeling are realized without rotating around the target object, which improves the adaptability to different scenes.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic diagram of a calibration object photographed by an acquisition device in an embodiment of the present invention;
fig. 2 is a schematic view of shooting during rotation of the collecting device in the embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating an acquisition device according to an embodiment of the present invention turning to a target direction to shoot a target;
FIG. 4 is a schematic diagram of shooting with a 3D intelligent vision device in an embodiment of the present invention;
FIG. 5 is another schematic view of shooting with a 3D smart vision device in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an acquisition device with a rotating structure for an acquisition area moving device according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of autorotation shooting of an airborne acquisition device in an embodiment of the invention;
fig. 8 is a schematic diagram of a vehicle-mounted acquisition device in a straight driving shooting mode in an embodiment of the invention;
wherein, 1 target object, 2 rotating device, 3 rotating device, 4 image acquisition device.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
3D acquisition calibration flow
Referring to figs. 1-3, when the object to be collected is B, a calibration object A may be placed around B; in many cases, however, the calibration object A cannot be placed near the object B. In that case the procedure is as follows:
(1) A location at a distance from the target object B is selected and the calibration object A is set there.
(2) Images of the calibration object A are shot with the acquisition device.
(3) The whole acquisition device is moved and/or rotated while shooting continuously, until the target object B enters the field of view of the acquisition device.
(4) The movement or rotation of the whole acquisition device can be stopped once it is aligned with the target object B.
(5) The acquisition device acquires a plurality of images of the target object B.
Of course, it is also possible to shoot B first and then move the device until the image of A is taken; the process is simply the reverse of that described above.
In step (3), the acquisition device acquires continuously during movement at time or space intervals, where the continuous acquisition should satisfy: the images P, Q, R acquired at three adjacent acquisition positions should satisfy P ∩ Q ∩ R ≠ ∅, so as to ensure that the information of the calibration object can be used for the calibration of the target object.
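A rough way to verify this overlap condition offline is to check that feature points of the middle image are matched into both of its neighbours. The sketch below uses ORB features (a freely available stand-in for the SURF operator used later in the text); the threshold and file names are illustrative.

```python
import cv2

def triple_overlap_ok(img_p, img_q, img_r, min_common=20):
    """Approximate check that P ∩ Q ∩ R is not empty: count keypoints of the middle
    frame Q that are matched into both the previous frame P and the next frame R."""
    orb = cv2.ORB_create(nfeatures=2000)
    des_p, des_q, des_r = (orb.detectAndCompute(im, None)[1] for im in (img_p, img_q, img_r))
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    q_in_p = {m.queryIdx for m in matcher.match(des_q, des_p)}
    q_in_r = {m.queryIdx for m in matcher.match(des_q, des_r)}
    common = q_in_p & q_in_r          # Q features visible in both neighbours
    return len(common) >= min_common

# usage with three consecutive shots (paths are placeholders):
# ok = triple_overlap_ok(cv2.imread("p.jpg", 0), cv2.imread("q.jpg", 0), cv2.imread("r.jpg", 0))
```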
The specific method by which the 3D intelligent vision equipment acquires images of the target object is as follows: a motor drives the turntable to rotate, which drives the camera to rotate, so that the position of the camera's optical axis moves in space. For example, the image acquisition device acquires an image of the target object every distance L, so that the camera acquires n images at different positions as the turntable rotates 360 degrees. The camera may acquire while the turntable rotates, or the turntable may stop at each acquisition position, the camera acquires, and the turntable then continues to the next position. Because the state of the target object can change in some situations, the acquisition speed needs to be increased; otherwise the target object captured in the different images will not be in the same condition and cannot be 3D synthesized and modeled. This can be solved in two ways: (1) n image acquisition devices are arranged on the turntable, so that n images can be shot at one time and another n images at the next position; (2) to save cost, the number of image acquisition devices is not increased but the rotation speed of the turntable is increased, in which case the shutter of the image acquisition device needs to be set to a faster mode, otherwise the images will be blurred. A higher shutter speed in turn requires better illumination, so this method requires a scene with a good light source or good natural light. Of course, besides 3D intelligent vision equipment, ordinary image acquisition devices can also be used for the above calibration; their specific structure is described in detail below. Any of the above acquisition devices can be moved in various ways, for example hand-held, on a rail, on a vehicle, or carried by an unmanned aerial vehicle.
Calibration method
(1) Photos of A and B are obtained at different shooting angles with the shooting device: A (or B) is shot first, then the camera is moved while shooting continuously until B (or A) is shot; the number of photos taken is not less than 3;
(2) On the calibration object A being shot, 4 (or more) marked points with known coordinates are evenly distributed, and it is ensured that several (more than 3) photos capture the measured marked points; the marked points are static and fixed.
(3) Feature points are extracted from all the photographed pictures and matched to obtain sparse feature points. The matched feature point coordinates are input, the sparse three-dimensional point cloud and the position and attitude data of the photographing camera are solved, and the model coordinate values of the sparse three-dimensional point cloud and the positions of object A and object B are obtained.
(4) The absolute coordinates X_T, Y_T, Z_T of the marked points and a picture template of the marked points are imported, and the template is matched against all input pictures to obtain the pixel row and column numbers x_i, y_i of the marked points in the input pictures (or the pixel row and column numbers x_i, y_i of the marked points are read manually from the photos);
(5) According to the position and attitude data of the photographing camera from step (3) and the input pixel row and column numbers x_i, y_i of the marked points, the coordinates (X_i, Y_i, Z_i) can be calculated. Based on the absolute coordinates (X_T, Y_T, Z_T) of the 4 (or more) marked points and the corresponding model coordinates (X_i, Y_i, Z_i), the 7 spatial coordinate conversion parameters between the model coordinates and the absolute coordinates are calculated using the spatial similarity transformation formula; the 7 parameters are εx, εy, εz, λ, X_0, Y_0, Z_0.
(6) And (3) converting coordinates of the three-dimensional point clouds of the object A and the object B and the position and posture data of the photographing camera into an absolute coordinate system by using the 7 parameters calculated in the step (5), and obtaining the real size and the dimension of the object B.
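The spatial similarity transformation used in steps (5) and (6) is not written out in the text above. A standard 7-parameter (Helmert) form with small rotation angles, consistent with the parameters εx, εy, εz, λ, X_0, Y_0, Z_0 listed in step (5), would be the following; this is the conventional geodetic formulation and is given here as an assumption, not as the patent's exact expression.

```latex
\begin{bmatrix} X_T \\ Y_T \\ Z_T \end{bmatrix}
=
\begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix}
+ (1+\lambda)
\begin{bmatrix}
 1 & \varepsilon_z & -\varepsilon_y \\
 -\varepsilon_z & 1 & \varepsilon_x \\
 \varepsilon_y & -\varepsilon_x & 1
\end{bmatrix}
\begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix}
```

Each marked point contributes three such equations; with 4 or more points the system is over-determined and the 7 parameters are solved by least squares.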
Using 3D intelligent vision equipment
Referring to fig. 4, the 3D intelligent vision equipment comprises an image acquisition device 4, a rotating device 2 and a cylindrical housing. As shown in fig. 4, the image acquisition device 4 is mounted on the rotating device 2, and the rotating device 2 is accommodated in the cylindrical housing and can rotate freely inside it.
The image acquisition device 4 is used for acquiring a group of images of the target object through the relative movement of the acquisition area of the image acquisition device 4 and the target object; and the acquisition area moving device is used for driving the acquisition area of the image acquisition device 4 to generate relative motion with the target. The acquisition area is the effective field of view range of the image acquisition device.
The image acquisition device 4 may be a camera and the rotation device 2 may be a turntable. The camera is arranged on the rotary table, the optical axis of the camera forms a certain included angle with the rotary table, and the rotary table surface is approximately parallel to the object to be acquired. The turntable drives the camera to rotate, so that the camera can acquire images of the target object at different positions.
Further, the camera is mounted on the turntable by an angle adjusting device, which can be rotated to adjust the included angle between the optical axis of the image acquisition device 4 and the turntable surface, and the adjusting range is-90 ° < γ <90 °. When a closer object is shot, the optical axis of the image acquisition device 4 can be shifted towards the central axis of the turntable, namely, gamma is adjusted towards the-90 degrees. When the photographing cavity is arranged, the optical axis of the image acquisition device 4 can be offset in the direction deviating from the central axis of the turntable, namely, gamma is adjusted in the direction of 90 degrees. The adjustment can be completed manually, a distance measuring device can be arranged for the 3D intelligent vision equipment, the distance between the distance measuring device and the target object is measured, and the gamma angle is automatically adjusted according to the distance.
The turntable can be connected with a motor through a transmission device, rotate under the drive of the motor and drive the image acquisition device 4 to rotate. The transmission may be a conventional mechanical structure such as a gear system or a belt.
In order to improve the acquisition efficiency, a plurality of image acquisition devices 4 may be provided on the turntable, as shown in fig. 5. The plurality of image acquisition devices 4 are distributed in sequence along the circumference of the turntable. For example, an image acquisition device 4 can be respectively arranged at two ends of any diameter of the turntable. The image acquisition devices 4 can be arranged at intervals of 60-degree circumferential angles, and 6 image acquisition devices 4 are uniformly arranged on the whole disc. The plurality of image capturing devices may be the same type of camera or different types of cameras. For example, a visible light camera and an infrared camera are arranged on the turntable, so that images with different wave bands can be acquired.
The image capturing device 4 is configured to capture an image of a target object, which may be a fixed focus camera or a zoom camera. In particular, the camera may be a visible light camera or an infrared camera. Of course, it should be understood that any device having an image capturing function may be used, and the device is not limited to the present invention, and may be, for example, a CCD, a CMOS, a camera, a video camera, an industrial camera, a monitor, a video camera, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, a smart glasses, a smart watch, a smart bracelet, and all devices having an image capturing function.
The rotating device can be in various forms such as a rotating arm, a rotating beam, a rotating bracket and the like besides the rotating disc, and can only drive the image acquisition device to rotate. In either case, the optical axis of the image capturing device 4 has a certain angle γ with the rotation plane.
In general, the light sources are distributed around the lens of the image acquisition device in a dispersed manner, for example, the light sources are annular LED lamps around the lens and are positioned on the turntable; or may be provided in the cross section of the cylindrical housing. Because in some applications the object to be acquired is a human body, it is necessary to control the intensity of the light source, avoiding discomfort to the human body. In particular, a light-softening device, for example a light-softening housing, can be arranged in the light path of the light source. Or the LED area light source is directly adopted, so that the light is softer, and the light is more uniform. More preferably, an OLED light source may be used, which is smaller, softer to light, and flexible to attach to a curved surface. The light source may be positioned at other locations that provide uniform illumination of the target. The light source can also be an intelligent light source, namely, the light source parameters can be automatically adjusted according to the conditions of the target object and the ambient light.
When 3D acquisition is performed, the optical axis direction of the image acquisition device at different acquisition positions is unchanged relative to the target object, and is generally approximately perpendicular to the surface of the target object, and at this time, the positions of two adjacent image acquisition devices 4, or the two adjacent acquisition positions of the image acquisition devices 4, satisfy the following conditions:
μ<0.482
wherein L is the linear distance between the optical centers of the two adjacent acquisition position image acquisition devices; f is the focal length of the image acquisition device; d is the rectangular length of a photosensitive element (CCD) of the image acquisition device; m is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; μ is an empirical coefficient.
D, taking a rectangular length when the two positions are along the length direction of the photosensitive element of the image acquisition device 4; when the two positions are along the width direction of the photosensitive element of the image acquisition device, d takes a rectangular width.
In either of the two positions, the distance from the photosensitive element to the surface of the object along the optical axis is set as M.
As described above, L should be the straight line distance between the optical centers of the two image capturing devices, but since the optical center position of the image capturing device 4 is not easily determined in some cases, the center of the photosensitive element of the image capturing device 4, the geometric center of the image capturing device 4, the center of the axis of connection of the image capturing device with the cradle head (or platform, stand), the center of the proximal end or distal end surface of the lens may be used instead in some cases, and the error caused by this is found to be within an acceptable range through experiments, so that the above range is also within the scope of the present invention.
By using the device provided by the invention, experiments are carried out, and the following experimental results are obtained.
From the above experimental results and extensive experimental experience, it can be concluded that μ should satisfy μ < 0.482; at this value a partial 3D model can already be synthesized, and although some parts cannot be synthesized automatically, this is acceptable where requirements are not high, and the parts that cannot be synthesized can be compensated manually or with a replacement algorithm. In particular, when μ < 0.357, the balance between synthesis effect and synthesis time is optimal; for a better synthesis effect, μ < 0.198 can be chosen, in which case the synthesis time increases but the synthesis quality is better. When μ reaches 0.5078, synthesis fails. It should be noted that the above ranges are merely preferred embodiments and do not limit the scope of protection.
The above data are obtained by experiments performed to verify the condition of the formula, and are not limiting on the invention. Even without this data, the objectivity of the formula is not affected. The person skilled in the art can adjust the parameters of the equipment and the details of the steps according to the requirement to perform experiments, and other data are obtained according with the formula.
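The displayed inequality that defines μ does not survive in this text; only the variable definitions (L, f, d, M) and the experimental thresholds remain. One reading consistent with those definitions is that μ compares the move between adjacent shots with the footprint d·M/f that the sensor covers at the object distance; the following is an assumption, not the patent's published formula:

```latex
\mu \;=\; \frac{L \cdot f}{d \cdot M} \;<\; 0.482
\qquad\text{(assumed form; } d\,M/f \text{ is the sensor footprint at distance } M\text{)}
```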
The adjacent acquisition positions refer to two adjacent positions on the movement track at which acquisition actions occur when the image acquisition device moves relative to the target object. This is straightforward when the image acquisition device itself moves. When it is the target object that moves and causes the relative motion, however, the motion of the target object is converted, by the relativity of motion, into an equivalent motion of the image acquisition device; the two adjacent positions at which acquisition occurs on this converted movement track are then the ones that are measured.
Using 3D image acquisition device
(1) The acquisition area moving device is a rotary structure
As shown in fig. 6, the object 1 is fixed at a certain position, and the rotation device 3 drives the image pickup device 4 to rotate around the object 1. The rotation device 3 can drive the image acquisition device 4 to rotate around the target object 1 through the rotation arm. Of course, the rotation is not necessarily a complete circular motion, and can be only rotated by a certain angle according to the acquisition requirement. And the rotation is not necessarily circular, and the motion track of the image acquisition device 4 can be other curve tracks, so long as the camera is ensured to shoot an object from different angles.
The rotation device 3 may also drive the image capturing device 4 to rotate, as shown in fig. 7, so that the image capturing device 4 can capture images of the target object from different angles through rotation.
The rotating device 3 can take various forms such as a cantilever, a turntable, or a track; the image acquisition device 4 can also be moved by hand or carried by a vehicle or an aircraft, as long as the image acquisition device 4 is made to move.
In addition to the above manner, in some cases the camera can be fixed while the stage carrying the target object rotates, so that the direction of the target object facing the image acquisition device changes continuously and the image acquisition device can capture images of the target object from different angles. In this case the calculation can still be carried out as if the motion were a motion of the image acquisition device, so that it satisfies the corresponding empirical formula (described in detail below). For example, in a scenario where the stage rotates, it can be assumed that the stage is stationary and the image acquisition device rotates. The spacing of the shooting positions when the image acquisition device rotates is set using the empirical formula, from which the rotation speed of the image acquisition device is derived and the rotation speed of the stage is deduced in reverse, which makes speed control convenient and realizes 3D acquisition. Of course, such a scenario is not common; more commonly it is still the image acquisition device that rotates.
In addition, in order to enable the image acquisition device to acquire images of different directions of the target object, the image acquisition device and the target object can be kept still, and the image acquisition device and the target object can be realized by rotating the optical axis of the image acquisition device. For example: the acquisition area moving device is an optical scanning device, so that the acquisition area of the image acquisition device and the target generate relative motion under the condition that the image acquisition device does not move or rotate. The acquisition area moving device also comprises a light deflection unit which is mechanically driven to rotate or is electrically driven to deflect the light path or is arranged in a plurality of groups in space, so that images of the target object are obtained from different angles. The light deflection unit may typically be a mirror which is rotated such that images of the object in different directions are acquired. Or directly spatially arranging a mirror surrounding the object, in turn causing light from the mirror to enter the image acquisition device. Similarly to the foregoing, the rotation of the optical axis in this case can be regarded as the rotation of the virtual position of the image pickup device, and by this conversion method, it is assumed that the image pickup device is rotated, and thus calculation is performed using the following empirical formula.
The image acquisition device is used for acquiring an image of a target object, and can be a fixed-focus camera or a zoom camera. In particular, the camera may be a visible light camera or an infrared camera. Of course, it should be understood that any device having an image capturing function may be used, and the device is not limited to the present invention, and may be, for example, a CCD, a CMOS, a camera, a video camera, an industrial camera, a monitor, a video camera, a mobile phone, a tablet, a notebook, a mobile terminal, a wearable device, a smart glasses, a smart watch, a smart bracelet, and all devices having an image capturing function.
The device also comprises a processor, also called a processing unit, which is used for synthesizing a 3D model of the target object according to a 3D synthesis algorithm and obtaining 3D information of the target object according to a plurality of images acquired by the image acquisition device.
(2) The acquisition area moving device is of a translational structure
Besides the rotating structure, the image acquisition device can also move relative to the target object along a linear track. For example, the image acquisition device is located on a linear rail, or on a vehicle or unmanned aerial vehicle travelling in a straight line, as shown in fig. 8, and passes the target object along the linear track while capturing images, without rotating during the process. The linear rail can also be replaced by a linear cantilever. More preferably, when the image acquisition device as a whole moves along the linear track, it performs a certain rotation so that its optical axis faces the target object.
(3) The acquisition area moving device is of a random movement structure
Sometimes the movement of the acquisition area is irregular, for example when the image acquisition device is hand-held or when a vehicle or aircraft travels along an irregular route; it is then difficult to move along a strict track, and the movement track of the image acquisition device is hard to predict accurately. How to guarantee that the captured images can be synthesized into a 3D model accurately and stably in this case is a difficult problem that has not yet been addressed. A common approach is to take more photos and use redundancy to compensate, but the synthesis results are then not stable. Although there are ways to improve the synthesis effect by limiting the rotation angle of the camera, in practice the user is not sensitive to angles, and even if a preferred angle is given it is difficult for the user to follow it in hand-held shooting. Therefore the invention proposes improving the synthesis effect and shortening the synthesis time by limiting the distance the camera moves between two shots.
In the case of irregular motion, a sensor may be provided in the mobile terminal or the image pickup device, and the linear distance moved by the image pickup device at the time of two shots may be measured by the sensor, and when the movement distance does not satisfy the above-described experience condition regarding L (specifically, the following condition), an alarm may be given to the user. The alarm includes sounding or lighting an alarm to the user. Of course, the distance of the user moving and the movable maximum distance L can be displayed on the screen of the mobile phone when the user moves the image acquisition device or prompted by voice in real time. The sensor for realizing the function comprises: rangefinders, gyroscopes, accelerometers, positioning sensors, and/or combinations thereof.
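A minimal sketch of the alarm logic described above is given below; it simply compares the straight-line move between consecutive shot positions with the empirical limit L. How the positions are obtained (integrated accelerometer data, a positioning sensor, a rangefinder) is outside the sketch, and the function and argument names are illustrative.

```python
import math

def spacing_warnings(shot_positions, l_max):
    """Return the indices and distances of shots whose move from the previous shot
    exceeds the empirical limit l_max (same length unit as the positions)."""
    warnings = []
    for i in range(1, len(shot_positions)):
        moved = math.dist(shot_positions[i], shot_positions[i - 1])
        if moved > l_max:
            warnings.append((i, moved))   # the app would beep / flash an alert here
    return warnings

# e.g. positions in millimetres from the phone's motion sensors (illustrative numbers):
# print(spacing_warnings([(0, 0, 0), (120, 5, 0), (700, 10, 0)], l_max=400))
```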
(4) Multi-camera mode
It can be understood that, besides the camera and the target object relatively move so that the camera can shoot images of different angles of the target object, a plurality of cameras can be arranged at different positions around the target object, so that the aim of shooting images of different angles of the target object at the same time can be achieved.
When the acquisition area moves relative to the target object, particularly the image acquisition device rotates around the target object, the optical axis direction of the image acquisition device at different acquisition positions changes relative to the target object during 3D acquisition, and at the moment, the positions of two adjacent image acquisition devices or the two adjacent acquisition positions of the image acquisition device meet the following conditions:
δ<0.603
wherein L is the linear distance between the optical centers of the two adjacent acquisition position image acquisition devices; f is the focal length of the image acquisition device; d is the rectangular length or width of a photosensitive element (CCD) of the image acquisition device; t is the distance from the photosensitive element of the image acquisition device to the surface of the target along the optical axis; delta is the adjustment coefficient.
D, taking a rectangular length when the two positions are along the length direction of the photosensitive element of the image acquisition device; when the two positions are along the width direction of the photosensitive element of the image acquisition device, d takes a rectangular width.
When the image acquisition device is at either of the two positions, the distance from the photosensitive element to the surface of the target object along the optical axis is taken as T. In another case, L is the straight-line distance between the optical centers of the two image acquisition devices A_n and A_{n+1}; the distances from the photosensitive elements of the neighbouring image acquisition devices A_{n-1} and A_{n+2} and of A_n and A_{n+1} to the surface of the target object 1 along the optical axis are T_{n-1}, T_n, T_{n+1}, T_{n+2} respectively, and T = (T_{n-1} + T_n + T_{n+1} + T_{n+2}) / 4. Of course, the average need not be computed over only the 4 adjacent positions; more positions can be used.
By using the device provided by the invention, experiments are carried out, and the following experimental results are obtained.
The camera lens was replaced and the experiment was repeated, the following experimental results were obtained.
The camera lens was replaced and the experiment was repeated, the following experimental results were obtained.
As described above, L should be the straight line distance between the optical centers of the two image capturing devices, but since the optical center position of the image capturing device is not easily determined in some cases, the center of the photosensitive element of the image capturing device, the geometric center of the image capturing device, the center of the axis of connection of the image capturing device with the cradle head (or platform, bracket), the center of the proximal end or distal end surface of the lens may be used instead in some cases, and the error caused by this is found to be within an acceptable range through experiments, so the above range is also within the scope of the present invention.
In the prior art, parameters such as the object size and the field angle are generally used to estimate the camera position, and the positional relationship between two cameras is also expressed as an angle. Angles are inconvenient in practice because they are not easy to measure. Moreover, the object size changes with the measured object: for example, after collecting 3D information of an office building, the size must be re-measured and the estimate redone before collecting a pavilion. Such inconvenient and repeated measurement introduces errors, which in turn cause errors in the camera position estimate. In this scheme, based on a large amount of experimental data, an empirical condition that the camera positions need to satisfy is given, which avoids measuring angles that are hard to measure accurately and removes the need to measure the object size directly. In the empirical condition, d and f are fixed parameters of the camera; when the camera and lens are purchased, the manufacturer provides the corresponding values and no measurement is needed. T is only a straight-line distance that can conveniently be measured with traditional methods such as a ruler or a laser rangefinder. The empirical formula of the invention therefore makes the preparation process convenient and fast and improves the accuracy of the camera position arrangement, so that the cameras can be placed in optimized positions and 3D synthesis accuracy and speed are both taken into account.
From the above experimental results and extensive experimental experience, it can be concluded that δ should satisfy δ < 0.603; at this value a partial 3D model can be synthesized, and although some parts cannot be synthesized automatically, this is acceptable where requirements are low, and the parts that cannot be synthesized can be compensated manually or with a replacement algorithm. In particular, when δ < 0.410, the balance between synthesis effect and synthesis time is optimal; δ < 0.356 can be chosen for a better synthesis effect, in which case the synthesis time increases but the synthesis quality is better. To further improve the synthesis effect, δ < 0.311 may be selected. When δ reaches 0.681, synthesis fails. It should be noted that the above ranges are merely preferred embodiments and do not limit the scope of protection.
As can be seen from the above experiments, to determine the shooting positions of the camera, only the camera parameters (focal length f, CCD size d) and the distance T between the camera CCD and the object surface need to be obtained according to the above formula, which makes it easy to design and debug the device. Since the camera parameters (focal length f, CCD size) are already fixed when the camera is purchased and are given in the product description, they are readily available. The camera position can therefore be calculated easily from the above formula without cumbersome field-of-view or object-size measurements. In particular, when the camera lens needs to be replaced, the camera position can be obtained by simply substituting the new lens focal length f and recalculating; similarly, when different objects are collected, measuring the object size repeatedly would be cumbersome because the sizes differ, whereas with the method of the invention the camera position can be determined without measuring the object size. The camera position determined by the invention takes both synthesis time and synthesis effect into account. The above empirical condition is therefore one of the inventive aspects of the present invention.
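As a concrete illustration of this point, the sketch below computes the largest allowed spacing between adjacent shooting positions from nothing but the lens focal length f, the sensor size d and the measured distance T. It assumes the missing inequality has the form δ = L·f/(d·T) (the same reading as suggested for μ above); the formula is therefore an assumption, while the thresholds 0.603 / 0.410 / 0.356 / 0.311 are the values reported in the text.

```python
def max_adjacent_spacing(f_mm, d_mm, t_mm, delta=0.410):
    """Largest optical-centre spacing L between adjacent acquisition positions,
    under the assumed condition delta = L * f / (d * T) < threshold."""
    return delta * d_mm * t_mm / f_mm

# Example with illustrative numbers: 50 mm lens, 23.5 mm sensor length, object 8 m away.
# print(max_adjacent_spacing(50.0, 23.5, 8000.0))   # -> allowed spacing in mm
```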
The above data are obtained by experiments performed to verify the condition of the formula, and are not limiting on the invention. Even without this data, the objectivity of the formula is not affected. The person skilled in the art can adjust the parameters of the equipment and the details of the steps according to the requirement to perform experiments, and other data are obtained according with the formula.
The rotation motion of the invention is that the previous position acquisition plane and the subsequent position acquisition plane are crossed instead of parallel in the acquisition process, or the optical axis of the previous position image acquisition device and the optical axis of the subsequent position image acquisition position are crossed instead of parallel. That is, the movement of the acquisition region of the image acquisition device around or partially around the object can be considered as a relative rotation of the two. Although more orbital rotational motion is exemplified in the embodiments of the present invention, it is understood that the limitations of the present invention may be used as long as non-parallel motion between the acquisition region of the image acquisition device and the target object is rotational. The scope of the invention is not limited to orbital rotation in the embodiments.
The adjacent acquisition positions refer to two adjacent positions on the movement track at which acquisition actions occur when the image acquisition device moves relative to the target object. This is straightforward when the image acquisition device itself moves. When it is the target object that moves and causes the relative motion, however, the motion of the target object is converted, by the relativity of motion, into an equivalent motion of the image acquisition device; the two adjacent positions at which acquisition occurs on this converted movement track are then the ones that are measured.
3D synthesis modeling device and method
The processor is also called a processing unit and is used for synthesizing a 3D model of the target object according to a plurality of images acquired by the image acquisition device and a 3D synthesis algorithm to obtain 3D information of the target object. The image acquisition device sends the acquired images to the processing unit, and the processing unit obtains 3D information of the target object according to the images in the group of images. Of course, the processing unit may be directly disposed in the housing where the image capturing device is located, or may be connected to the image capturing device through a data line or through a wireless manner. For example, an independent computer, a server, a cluster server, or the like may be used as the processing unit, and the image data acquired by the image acquisition device may be transmitted to the processing unit for 3D synthesis. Meanwhile, the data of the image acquisition device can be transmitted to the cloud platform, and the 3D synthesis is performed by utilizing the powerful computing capacity of the cloud platform.
The processing unit performs the following method:
1. and performing image enhancement processing on all the input photos. The following filters are used to enhance the contrast of the original photograph and to suppress noise at the same time.
Where g(x, y) is the gray value of the original image at (x, y); f(x, y) is the gray value at (x, y) after enhancement by the Wallis filter; m_g is the local gray mean of the original image; s_g is the local gray standard deviation of the original image; m_f is the local gray target mean of the transformed image; s_f is the target local gray standard deviation of the transformed image; c ∈ (0, 1) is the expansion constant of the image variance; and b ∈ (0, 1) is the image brightness coefficient constant.
The filter can greatly enhance image texture modes with different scales in the image, so that the number and the precision of feature points can be improved when the point features of the image are extracted, and the reliability and the precision of a matching result are improved when the photo features are matched.
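The Wallis filter expression itself is not reproduced in this text; only the variable definitions above remain. The sketch below implements a Wallis-style local contrast enhancement using the commonly cited form of the filter, which is an assumption about the exact expression; the window size and target values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_enhance(gray, win=31, m_f=127.0, s_f=60.0, c=0.8, b=0.9):
    """Pull the local mean and standard deviation of the image towards the target
    values m_f and s_f. gray: 2-D array with values in [0, 255]."""
    g = gray.astype(np.float64)
    m_g = uniform_filter(g, size=win)                                            # local mean
    s_g = np.sqrt(np.maximum(uniform_filter(g * g, size=win) - m_g ** 2, 1e-6))  # local std
    gain = c * s_f / (c * s_g + (1.0 - c) * s_f)      # assumed standard Wallis gain term
    out = (g - m_g) * gain + b * m_f + (1.0 - b) * m_g
    return np.clip(out, 0, 255).astype(np.uint8)
```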
2. Feature points are extracted from all input images and matched to obtain sparse feature points. A SURF operator is used for feature point extraction and matching. The SURF feature matching method comprises three main processes: feature point detection, feature point description, and feature point matching. The method uses the Hessian matrix to detect feature points, uses box filters instead of second-order Gaussian filtering, and uses the integral image to accelerate the convolutions and thereby the computation; the dimensionality of the local image feature descriptor is reduced to speed up matching. The main steps are: (1) constructing the Hessian matrix and generating all interest points for feature extraction; the Hessian matrix is built to generate stable edge points (mutation points) of the image; (2) constructing the scale space and locating feature points: each pixel processed by the Hessian matrix is compared with the 26 points in its neighbourhood in the two-dimensional image space and the scale space, key points are located preliminarily, weak or incorrectly located key points are filtered out, and the final stable feature points are retained; (3) determining the main direction of each feature point using the Haar wavelet responses in its circular neighbourhood: within the circular neighbourhood, the sum of the horizontal and vertical Haar wavelet responses of all points within a 60-degree sector is computed, the sector is then rotated in steps of 0.2 radian and the responses are summed again, and the direction of the sector with the largest value is taken as the main direction of the feature point; (4) generating a 64-dimensional feature descriptor: a 4×4 block of rectangular sub-regions is taken around the feature point, oriented along the main direction of the feature point; for each sub-region, the Haar wavelet responses of 25 pixels in the horizontal and vertical directions (both relative to the main direction) are accumulated as the sums of the horizontal values, vertical values, absolute horizontal values and absolute vertical values, giving 4 values per sub-region and hence a 4×4×4 = 64-dimensional vector as the SURF descriptor; (5) matching the feature points: the matching degree is determined by the Euclidean distance between two feature descriptors, and the shorter the Euclidean distance, the better the match.
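A compact sketch of this detect-describe-match step is given below. It uses OpenCV's SURF implementation, which lives in the opencv-contrib package (cv2.xfeatures2d) and may be unavailable in stock OpenCV builds; the Hessian threshold and ratio value are illustrative.

```python
import cv2

def surf_match(img1, img2, hessian=400, ratio=0.7):
    """Detect SURF keypoints in two grayscale images, describe them with 64-dimensional
    descriptors and keep matches that pass a Lowe-style ratio test on Euclidean distance."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]       # shorter distance = better match
    return kp1, kp2, good
```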
3. The matched feature point coordinates are input, and the position and attitude data of the photographing camera together with a sparse three-dimensional point cloud of the target are solved using bundle adjustment, obtaining the model coordinate values of the sparse target point cloud and the camera positions; with the sparse feature points as initial values, dense matching of the multi-view photos is performed to obtain dense point cloud data. The process comprises four main steps: stereo pair selection, depth map computation, depth map optimization and depth map fusion. For each image in the input data set, a reference image is selected to form a stereo pair used to compute a depth map. A rough depth map is thus obtained for every image; since these may contain noise and errors, the neighbouring depth maps are used for a consistency check to optimize the depth map of each image. Finally, depth map fusion yields the three-dimensional point cloud of the whole scene.
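The consistency check between neighbouring depth maps mentioned above can be illustrated as follows: each pixel of a reference depth map is lifted to 3D, projected into a neighbouring view, and the projected depth is compared with the neighbour's own estimate. This is a simplified sketch, not the patent's algorithm; the intrinsics K and the relative pose R, t are assumed inputs.

```python
import numpy as np

def consistency_mask(depth_ref, depth_src, K, R, t, rel_tol=0.01):
    """Mark reference-depth pixels whose reprojected depth agrees with the source view.
    depth_ref, depth_src: (H, W) depth maps; K: 3x3 intrinsics; R, t: pose of the
    source view relative to the reference view."""
    h, w = depth_ref.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N homogeneous
    pts = np.linalg.inv(K) @ pix * depth_ref.reshape(1, -1)             # back-project to 3D
    proj = K @ (R @ pts + t.reshape(3, 1))                              # into the source view
    z = proj[2]
    x = np.clip(np.round(proj[0] / z).astype(int), 0, w - 1)
    y = np.clip(np.round(proj[1] / z).astype(int), 0, h - 1)
    ok = np.abs(depth_src[y, x] - z) < rel_tol * z                      # depths agree?
    return ok.reshape(h, w)
```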
4. The surface of the target object is reconstructed from the dense point cloud, including defining an octree, setting the function space, creating the vector field, solving the Poisson equation and extracting the isosurface. The integral relation between the sample points and the indicator function is obtained from the gradient relation, the vector field of the point cloud is obtained from this integral relation, and an approximation of the gradient field of the indicator function is computed to form the Poisson equation. An approximate solution of the Poisson equation is obtained by matrix iteration, the isosurface is extracted with a marching cubes algorithm, and the model of the measured object is reconstructed from the measured point cloud.
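For reference, this step can be reproduced with an off-the-shelf implementation; the sketch below uses Open3D's Poisson surface reconstruction as a stand-in for the octree / vector-field / Poisson-equation pipeline described above. The file path and octree depth are illustrative.

```python
import open3d as o3d

def poisson_mesh(ply_path, depth=9):
    """Reconstruct a triangle mesh from a dense point cloud via Poisson reconstruction."""
    pcd = o3d.io.read_point_cloud(ply_path)
    pcd.estimate_normals()     # Poisson reconstruction needs oriented normals
    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=depth)
    return mesh

# mesh = poisson_mesh("dense_cloud.ply")
# o3d.io.write_triangle_mesh("target_surface.ply", mesh)
```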
5. Fully automatic texture mapping of the object model. After the surface model is constructed, texture mapping is performed. The main process comprises: (1) acquiring texture data through the surface triangular mesh reconstructed from the images of the target; (2) visibility analysis of the reconstructed model's triangular faces: the visible image set and the optimal reference image of each triangular face are computed using the calibration information of the images; (3) clustering of triangular faces to generate texture patches: according to the visible image set of each triangular face, its optimal reference image and the neighbourhood topology of the faces, the triangular faces are clustered into a number of reference-image texture patches; (4) automatic sorting of the texture patches to generate the texture image: the generated texture patches are sorted by size to generate the texture image with the smallest enclosing area, and the texture mapping coordinates of each triangular face are obtained.
Although the above embodiments describe the image acquisition device acquiring images, this should not be understood as meaning that the method only applies to groups of single still pictures; this is merely an explanatory presentation adopted for ease of understanding. The image acquisition device can also acquire video data, and 3D synthesis can be performed directly on the video data or on images extracted from it. However, the shooting positions of the video frames or extracted images used in the synthesis must still satisfy the above empirical formula.
The terms target object and object both denote an object for which three-dimensional information is to be acquired; this may be a single solid object or a composition of several objects, for example a building, a bridge, and so on. The three-dimensional information of the target object includes a three-dimensional image, a three-dimensional point cloud, a three-dimensional mesh, local three-dimensional features, three-dimensional dimensions, and any other parameter carrying three-dimensional features of the target object. In the present invention, three-dimensional means having information in the three directions XYZ, in particular having depth information, which is essentially different from having only two-dimensional plane information. It is also essentially different from definitions that are called three-dimensional, panoramic, holographic or stereoscopic but in fact include only two-dimensional information and, in particular, no depth information.
The acquisition region in the present invention refers to the range that can be photographed by the image acquisition device (for example, a camera). The image acquisition device in the present invention may be a CCD, a CMOS sensor, a camera, a video camera, an industrial camera, a monitor, a webcam, a mobile phone, a tablet, a notebook computer, a mobile terminal, a wearable device, smart glasses, a smart watch, a smart bracelet, or any other device with an image acquisition function.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be construed as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the apparatus of the embodiments may be adaptively changed and disposed in one or more apparatuses different from the embodiments. The modules or units or components of the embodiments may be combined into one module or unit or component and, furthermore, they may be divided into a plurality of sub-modules or sub-units or sub-components. Any combination of all features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be used in combination, except insofar as at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components in an apparatus in accordance with embodiments of the present invention may be implemented in practice using a microprocessor or Digital Signal Processor (DSP). The present invention can also be implemented as an apparatus or device program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present invention may be stored on a computer readable medium, or may have the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
By now it should be appreciated by those skilled in the art that while a number of exemplary embodiments of the invention have been shown and described herein in detail, many other variations or modifications of the invention consistent with the principles of the invention may be directly ascertained or inferred from the present disclosure without departing from the spirit and scope of the invention. Accordingly, the scope of the present invention should be understood and deemed to cover all such other variations or modifications.

Claims (10)

1. A calibration method for 3D modeling of a remote target object based on continuous shooting is characterized by comprising the following steps of:
(1) Selecting a calibration object A which is arranged at a distance away from the target object B;
(2) Shooting a plurality of images of the calibration object A by using acquisition equipment;
(3) Moving and/or rotating the whole acquisition equipment, and continuously shooting until the target object B enters the field of view of the acquisition equipment;
(4) The acquisition equipment acquires a plurality of images of the target object B;
the calibration object is provided with a plurality of calibration points;
calibrating coordinates of the target object according to the coordinates of the plurality of calibration points, including: extracting feature points of all the photographed pictures, and matching the feature points to obtain model coordinate values of the object A and the object B; calibrating the absolute coordinates of the target object according to the absolute coordinates of the calibration points and the model coordinates;
in step (3), the acquisition device continuously acquires images at certain time/space intervals during the movement, and the continuous acquisition satisfies the condition that the images P, Q and R acquired at three adjacent acquisition positions have a non-empty common region, i.e. P ∩ Q ∩ R ≠ ∅.
2. The method of claim 1, wherein: the acquisition equipment is 3D intelligent vision equipment and comprises an image acquisition device and a rotating device;
the rotating device is used for driving the acquisition area of the image acquisition device to generate relative motion with the target;
and the image acquisition device is used for acquiring a group of images of the target object through the relative motion.
3. The method of claim 2, wherein: the position of the image acquisition device when the image acquisition device rotates to acquire a group of images accords with the following conditions:
wherein L is the linear distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length of the photosensitive element of the image acquisition device; M is the distance from the photosensitive element of the image acquisition device to the surface of the target object along the optical axis; and μ is an empirical coefficient.
4. The method of claim 1, wherein: when the acquisition equipment is 3D intelligent image acquisition equipment, two adjacent acquisition positions of the 3D intelligent image acquisition equipment accord with the following conditions:
wherein L is the linear distance between the optical centers of the image acquisition device at two adjacent acquisition positions; f is the focal length of the image acquisition device; d is the rectangular length or width of the photosensitive element of the image acquisition device; T is the distance from the photosensitive element of the image acquisition device to the surface of the target object along the optical axis; and δ is an adjustment coefficient.
5. The method of claim 1, wherein: feature points of the acquired images are extracted and matched to obtain sparse feature points; the matched feature point coordinates are input, and the sparse three-dimensional point cloud and the position and attitude data of the photographing image acquisition device are solved, so as to obtain model coordinate values of the sparse three-dimensional point clouds of the object A and the object B and of the acquisition positions.
6. The method of claim 5, wherein: absolute coordinate X of a calibration point introduced onto a calibration object T 、Y T 、Z T And the picture template of the marked point is matched with all the input pictures to obtain the pixel row number x containing the marked point in the input pictures i 、y i
7. The method of claim 5, wherein: the method also comprises inputting the pixel row and column numbers x of the marked points according to the position and posture data of the photographing camera i 、y i Can calculate the coordinate (X) i 、Y i 、Z i );
According to the absolute coordinates (X) T 、Y T 、Z T ) And model coordinates (X) i 、Y i 、Z i ) And 7 space coordinate conversion parameters of the model coordinates and the absolute coordinates are calculated by using a space similarity transformation formula.
8. The method of claim 7, wherein: the method further comprises converting the coordinates of the three-dimensional point clouds of the object A and the object B and the position and attitude data of the photographing camera into the absolute coordinate system by using the calculated 7 parameters, so as to obtain the real size of the target object.
9. The method of claim 1, wherein: the absolute size of the target is obtained.
10. A 3D model construction method, characterized by comprising: using the method according to any one of claims 1-9.
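As a worked illustration of the spatial similarity transformation referred to in claims 7 and 8, the sketch below estimates the 7 conversion parameters (one scale, three rotation, three translation) from matched calibration-point coordinates and applies them to bring the model point cloud into the absolute coordinate system; the SVD-based (Umeyama-style) solver and all names are assumptions chosen for illustration, since the claims do not prescribe a particular algorithm.

```python
import numpy as np

def similarity_transform(model_pts, absolute_pts):
    """Estimate the 7-parameter spatial similarity transform (scale, rotation,
    translation) mapping model coordinates onto absolute coordinates, given
    matched calibration points as N x 3 arrays."""
    mu_m, mu_a = model_pts.mean(axis=0), absolute_pts.mean(axis=0)
    M, A = model_pts - mu_m, absolute_pts - mu_a
    cov = A.T @ M / len(model_pts)                 # cross-covariance of the two sets
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                  # guard against a reflection
        D[2, 2] = -1.0
    R = U @ D @ Vt                                 # rotation (3 parameters)
    var_m = (M ** 2).sum() / len(model_pts)        # spread of the model points
    scale = np.trace(np.diag(S) @ D) / var_m       # isotropic scale (1 parameter)
    t = mu_a - scale * R @ mu_m                    # translation (3 parameters)
    return scale, R, t

def to_absolute(points, scale, R, t):
    """Apply the recovered transform to a point cloud to obtain true-scale coordinates."""
    return scale * points @ R.T + t
```

With the calibration points' model coordinates and their surveyed absolute coordinates as input, to_absolute then yields the true-scale point cloud from which the real dimensions of the target object can be measured.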
CN202110636162.4A 2020-03-16 2020-03-16 Calibration method for 3D modeling of remote target object based on continuous shooting Active CN113327291B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110636162.4A CN113327291B (en) 2020-03-16 2020-03-16 Calibration method for 3D modeling of remote target object based on continuous shooting

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010183304.1A CN111429523B (en) 2020-03-16 2020-03-16 Remote calibration method in 3D modeling
CN202110636162.4A CN113327291B (en) 2020-03-16 2020-03-16 Calibration method for 3D modeling of remote target object based on continuous shooting

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202010183304.1A Division CN111429523B (en) 2020-03-16 2020-03-16 Remote calibration method in 3D modeling

Publications (2)

Publication Number Publication Date
CN113327291A CN113327291A (en) 2021-08-31
CN113327291B true CN113327291B (en) 2024-03-22

Family

ID=71553523

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110636162.4A Active CN113327291B (en) 2020-03-16 2020-03-16 Calibration method for 3D modeling of remote target object based on continuous shooting
CN202010183304.1A Active CN111429523B (en) 2020-03-16 2020-03-16 Remote calibration method in 3D modeling

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010183304.1A Active CN111429523B (en) 2020-03-16 2020-03-16 Remote calibration method in 3D modeling

Country Status (2)

Country Link
CN (2) CN113327291B (en)
WO (1) WO2021185214A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113327291B (en) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 Calibration method for 3D modeling of remote target object based on continuous shooting
WO2022078417A1 (en) * 2020-10-15 2022-04-21 左忠斌 Rotatory intelligent visual 3d information collection device
CN112082486B (en) * 2020-10-15 2022-05-27 天目爱视(北京)科技有限公司 Handheld intelligent 3D information acquisition equipment
CN112303423B (en) * 2020-10-15 2022-10-25 天目爱视(北京)科技有限公司 Intelligent three-dimensional information acquisition equipment stable in rotation
CN112254673B (en) * 2020-10-15 2022-02-15 天目爱视(北京)科技有限公司 Self-rotation type intelligent vision 3D information acquisition equipment
CN112254669B (en) * 2020-10-15 2022-09-16 天目爱视(北京)科技有限公司 Intelligent visual 3D information acquisition equipment of many bias angles
CN112492292B (en) * 2020-11-27 2023-04-11 天目爱视(北京)科技有限公司 Intelligent visual 3D information acquisition equipment of free gesture
WO2024156022A1 (en) * 2023-01-24 2024-08-02 Visionary Machines Pty Ltd Systems and methods for calibrating cameras and camera arrays


Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE518620C2 (en) * 2000-11-16 2002-10-29 Ericsson Telefon Ab L M Scene construction and camera calibration with robust use of "cheirality"
US8199194B2 (en) * 2008-10-07 2012-06-12 The Boeing Company Method and system involving controlling a video camera to track a movable target object
JP2013501304A (en) * 2009-08-04 2013-01-10 アイキュー ビジョン テクノロジーズ リミテッド System and method for object extraction
CN102661717A (en) * 2012-05-09 2012-09-12 河北省电力建设调整试验所 Monocular vision measuring method for iron tower
CN102867414B (en) * 2012-08-18 2014-12-10 湖南大学 Vehicle queue length measurement method based on PTZ (Pan/Tilt/Zoom) camera fast calibration
CN103617649B (en) * 2013-11-05 2016-05-11 北京江宜科技有限公司 A kind of river model topographic survey method based on Camera Self-Calibration technology
CN103868460B (en) * 2014-03-13 2016-10-05 桂林电子科技大学 Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN104299261B (en) * 2014-09-10 2017-01-25 深圳大学 Three-dimensional imaging method and system for human body
CN104316335B (en) * 2014-11-19 2017-01-18 烟台开发区海德科技有限公司 3D automobile wheel positioner multi-camera calibration system and method
CN105046715B (en) * 2015-09-16 2019-01-11 北京理工大学 A kind of line-scan digital camera scaling method based on interspace analytic geometry
CN107578464B (en) * 2017-06-30 2021-01-29 长沙湘计海盾科技有限公司 Conveyor belt workpiece three-dimensional contour measuring method based on line laser scanning
CN107977996B (en) * 2017-10-20 2019-12-10 西安电子科技大学 Spatial Target Localization Method Based on Target Calibration Localization Model
CN207556477U (en) * 2017-12-20 2018-06-29 北京卓立汉光仪器有限公司 Surface appearance measuring device
CN108470149A (en) * 2018-02-14 2018-08-31 天目爱视(北京)科技有限公司 A kind of 3D 4 D datas acquisition method and device based on light-field camera
CN109146949B (en) * 2018-09-05 2019-10-22 天目爱视(北京)科技有限公司 A kind of 3D measurement and information acquisition device based on video data
CN109903327B (en) * 2019-03-04 2021-08-31 西安电子科技大学 A target size measurement method for sparse point cloud
CN110428494A (en) * 2019-07-25 2019-11-08 螳螂慧视科技有限公司 Processing method, equipment and the system of three-dimensional modeling
CN110763152B (en) * 2019-10-09 2021-08-20 哈尔滨工程大学 An underwater active rotating structured light three-dimensional vision measurement device and measurement method
CN113327291B (en) * 2020-03-16 2024-03-22 天目爱视(北京)科技有限公司 Calibration method for 3D modeling of remote target object based on continuous shooting

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101833791A (en) * 2010-05-11 2010-09-15 成都索贝数码科技股份有限公司 Scene modeling method under single camera and system
WO2015096806A1 (en) * 2013-12-29 2015-07-02 刘进 Attitude determination, panoramic image generation and target recognition methods for intelligent machine
CN104346833A (en) * 2014-10-28 2015-02-11 燕山大学 Vehicle restructing algorithm based on monocular vision
CN105865326A (en) * 2015-01-21 2016-08-17 成都理想境界科技有限公司 Object size measurement method and image database data acquisition method
CN105550670A (en) * 2016-01-27 2016-05-04 兰州理工大学 Target object dynamic tracking and measurement positioning method
CN107194962A (en) * 2017-04-01 2017-09-22 深圳市速腾聚创科技有限公司 Point cloud and plane picture fusion method and device
WO2019127508A1 (en) * 2017-12-29 2019-07-04 深圳配天智能技术研究院有限公司 Smart terminal and 3d imaging method and 3d imaging system therefor
CN108288291A (en) * 2018-06-07 2018-07-17 北京轻威科技有限责任公司 Polyphaser calibration based on single-point calibration object
CN109242898A (en) * 2018-08-30 2019-01-18 华强方特(深圳)电影有限公司 A kind of three-dimensional modeling method and system based on image sequence
CN208653473U (en) * 2018-09-05 2019-03-26 天目爱视(北京)科技有限公司 Image capture device, 3D information comparison device, mating object generating means
CN109035379A (en) * 2018-09-10 2018-12-18 天目爱视(北京)科技有限公司 A kind of 360 ° of 3D measurements of object and information acquisition device
CN109801302A (en) * 2018-12-14 2019-05-24 华南理工大学 A kind of ultra-high-tension power transmission line foreign matter detecting method based on binocular vision
CN110288713A (en) * 2019-07-03 2019-09-27 北京机械设备研究所 A kind of quick three-dimensional model reconstruction method and system based on multi-vision visual
CN110443853A (en) * 2019-07-19 2019-11-12 广东虚拟现实科技有限公司 Scaling method, device, terminal device and storage medium based on binocular camera
CN110503694A (en) * 2019-08-08 2019-11-26 Oppo广东移动通信有限公司 Multi-camera calibration, device, storage medium and electronic equipment
CN110675455A (en) * 2019-08-30 2020-01-10 的卢技术有限公司 Self-calibration method and system for car body all-around camera based on natural scene

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
GPS binocular camera calibration and spatial coordinate reconstruction; Kong Xiaofang; Chen Qian; Gu Guohua; Qian Weixian; Ren Kan; Optics and Precision Engineering; 2017-02-15 (No. 02); full text *
Modeling research on self-calibrated three-dimensional reconstruction of deformed vehicles; Lu Guangquan; Li Yibing; Huang Shan; Automotive Engineering (No. 02); full text *
Height measurement of target objects based on monocular vision; Qian Ying; Zhang Meng; Computer Engineering and Design (No. 03); full text *
Three-dimensional modeling algorithm based on binocular stereo vision; Wang Yanxia; Wang Zhenzhou; Liu Jiaomin; Journal of Hebei University of Science and Technology (No. 03); full text *
Three-dimensional reconstruction of object surfaces using a single digital camera; Zhang Yong; Jin Xuebo; Computer Engineering and Design; 2008-06-16 (No. 11); full text *

Also Published As

Publication number Publication date
CN111429523B (en) 2021-06-15
CN111429523A (en) 2020-07-17
CN113327291A (en) 2021-08-31
WO2021185214A1 (en) 2021-09-23

Similar Documents

Publication Publication Date Title
CN113379822B (en) Method for acquiring 3D information of target object based on pose information of acquisition equipment
CN113532329B (en) Calibration method with projected light spot as calibration point
CN113327291B (en) Calibration method for 3D modeling of remote target object based on continuous shooting
CN111060023B (en) High-precision 3D information acquisition equipment and method
CN111238374B (en) Three-dimensional model construction and measurement method based on coordinate measurement
CN111292364B (en) Method for rapidly matching images in three-dimensional model construction process
CN111445529B (en) Calibration equipment and method based on multi-laser ranging
CN113066132B (en) 3D modeling calibration method based on multi-equipment acquisition
CN111292239B (en) Three-dimensional model splicing equipment and method
CN112016570B (en) Three-dimensional model generation method for background plate synchronous rotation acquisition
CN111076674B (en) Closely target object 3D collection equipment
CN111060008B (en) 3D intelligent vision equipment
CN112254670B (en) 3D information acquisition equipment based on optical scanning and intelligent vision integration
CN111768486A (en) Method and system for 3D reconstruction of monocular camera based on rotating refractor
CN111340959B (en) Three-dimensional model seamless texture mapping method based on histogram matching
CN113538552B (en) 3D information synthetic image matching method based on image sorting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant