
CN106846469B - Method and apparatus for reconstructing a three-dimensional scene from a focus stack based on feature point tracking - Google Patents

Method and apparatus for reconstructing a three-dimensional scene from a focus stack based on feature point tracking

Info

Publication number
CN106846469B
CN106846469B (application CN201710091014.2A)
Authority
CN
China
Prior art keywords
focus stack
three-dimensional scene
feature point
focusing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710091014.2A
Other languages
Chinese (zh)
Other versions
CN106846469A (en)
Inventor
刘畅
邱钧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Information Science and Technology University
Original Assignee
Beijing Information Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Information Science and Technology University filed Critical Beijing Information Science and Technology University
Publication of CN106846469A
Application granted
Publication of CN106846469B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/08: Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and apparatus for reconstructing a three-dimensional scene from a focus stack based on feature point tracking, comprising: establishing a forward model by which a three-dimensional scene generates a focus stack, giving the geometric relationship between the feature points of the three-dimensional scene and the focus stack; extracting the feature points of each image in the focus stack and tracking the coordinates of successfully matched feature points, thereby obtaining each feature point's trajectory through the focus stack; and establishing an inverse model for reconstructing the three-dimensional scene from the focus stack: from the successfully matched feature points, a system of equations in the feature points' three-dimensional coordinates is established, the system is solved to obtain those coordinates, the three-dimensional scene is reconstructed, and three-dimensional geometric measurement is realized. The focus stack of the invention is acquired with a fixed detector by moving the lens along the optical axis; with the scheme of the invention, three-dimensional reconstruction under the camera's field of view can be realized, providing accurate three-dimensional structure information for virtual reality and dimensional measurement.

Description

Method and device for reconstructing three-dimensional scene by focusing stack based on feature point tracking
Technical Field
The invention relates to the field of computer vision and digital image processing, in particular to a method and a device for reconstructing a three-dimensional scene by a focusing stack based on feature point tracking.
Background
A traditional camera photographs, through a lens, the sum of the intensities of light emitted from a three-dimensional scene onto a two-dimensional detector, which compresses light direction information and scene depth information. Computational photography is a field of three-dimensional imaging research that has emerged in recent years, and the focus stack is one of its methods for reconstructing three-dimensional scenes. A focus stack is a group of image sequences focused on different imaging planes or shot with different parameters, and it carries abundant three-dimensional information. In computer vision, reconstruction and geometric measurement of three-dimensional scenes rely primarily on the degree of focus and defocus within the focus stack. The existing method of reconstructing a three-dimensional scene from a focus stack is a reconstruction method based on a focus measure (also referred to as a zoom method), which requires computing a focus measure for each image of the focus stack.
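The focus-measure baseline described above can be sketched in a few lines. The following is an illustrative stand-in, not the patent's method: it uses the squared response of a 3x3 Laplacian as a hypothetical focus measure and picks, per pixel, the stack index where that measure peaks.

```python
import numpy as np

def laplacian_focus_measure(image: np.ndarray) -> np.ndarray:
    """Per-pixel focus measure: squared response of a 3x3 Laplacian kernel."""
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    padded = np.pad(image.astype(float), 1, mode="edge")
    h, w = image.shape
    resp = sum(k[i, j] * padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3))
    return resp ** 2

def depth_from_focus(stack: list[np.ndarray]) -> np.ndarray:
    """For each pixel, return the index of the stack image where it is sharpest."""
    measures = np.stack([laplacian_focus_measure(im) for im in stack])
    return measures.argmax(axis=0)
```

Unlike the patent's scheme, this computes a focus measure over every image of the stack, which is exactly the per-image cost the feature-point approach is meant to avoid.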
Disclosure of Invention
The invention aims to provide a method and a device for reconstructing a three-dimensional scene from a focus stack based on feature point tracking, which provide an effective means of reconstructing a three-dimensional scene from a focus stack by feature point tracking, distinct from multi-view methods of three-dimensional scene reconstruction.
To achieve the above object, the present invention provides a method for reconstructing a three-dimensional scene from a focus stack based on feature point tracking, the method comprising: establishing a forward model by which a three-dimensional scene generates a focus stack, to obtain the relation between the feature points of the three-dimensional scene and the focus stack; extracting the feature points of each image in the focus stack and, according to the forward model by which the three-dimensional scene generates the focus stack, tracking the coordinates of the successfully matched feature points through the focus stack; and establishing a system of equations relating the three-dimensional coordinates of the successfully matched feature points to the three-dimensional scene, solving the system to obtain those three-dimensional coordinates, and reconstructing the three-dimensional scene.
Further, the established forward model is as follows:
the relationship between the three-dimensional scene feature points and the focusing stack is as follows:
wherein the left-hand side is the forward model by which the three-dimensional scene generates the focus stack, (S_x, S_y) is the object plane of the scene, f(S_x, S_y) is the irradiance corresponding to (S_x, S_y), (x, y) is the nth imaging plane, depth(S_x, S_y) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and the kernel is the point spread function of (S_x, S_y) on the imaging plane (x, y).
Further, the establishing of the three-dimensional coordinates of the successfully matched feature points and the equation set of the three-dimensional scene, and solving of the equation set to obtain the three-dimensional coordinates of the successfully matched feature points specifically include: establishing an equation set of the feature point abscissa successfully matched and the depth map of the three-dimensional scene, and solving a least square problem to obtain the feature point abscissa successfully matched; and establishing an equation set of the successfully matched feature point ordinate and the depth map of the three-dimensional scene, and solving a least square problem to obtain the successfully matched feature point ordinate.
Further, the system of equations relating the feature point abscissa and the depth map of the three-dimensional scene is as follows:
wherein (S_x, S_y) is the object plane of the scene, (x, y) is the nth imaging plane, depth(S_x, S_y) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and the remaining term is the coordinate of the feature point on the nth imaging plane of the focus stack.
Further, the system of equations of the feature point ordinate and the depth map of the three-dimensional scene is as follows:
wherein (S_x, S_y) is the object plane of the scene, (x, y) is the nth imaging plane, depth(S_x, S_y) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and the remaining term is the coordinate of the feature point on the nth imaging plane of the focus stack.
The invention also provides a device for reconstructing a three-dimensional scene from a focus stack based on feature point tracking, the device comprising: the construction module is used for establishing a forward model for generating a focusing stack by a three-dimensional scene and giving a relation between a three-dimensional scene characteristic point and the focusing stack; the acquisition module is used for extracting the feature points of each image from the focusing stack and tracking the feature points successfully matched in the focusing stack; and the reconstruction module is used for establishing a three-dimensional coordinate of the successfully matched feature point and an equation set of the three-dimensional scene according to the successfully matched feature point acquired by the acquisition module, solving the equation set to obtain the three-dimensional coordinate of the successfully matched feature point, and reconstructing the three-dimensional scene.
Further, the forward model established by the construction module is as follows:
the relationship between the three-dimensional scene feature points and the focusing stack is as follows:
wherein the left-hand side is the forward model by which the three-dimensional scene generates the focus stack, (S_x, S_y) is the object plane of the scene, f(S_x, S_y) is the irradiance corresponding to (S_x, S_y), (x, y) is the nth imaging plane, depth(S_x, S_y) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and the kernel is the point spread function of (S_x, S_y) on the imaging plane (x, y).
Further, the reconstruction module specifically includes: the first relation unit is used for establishing an equation set of the feature point abscissa successfully matched and the depth map of the three-dimensional scene, and obtaining the feature point abscissa successfully matched by solving a least square problem; the second relation unit is used for establishing an equation set of the successfully matched feature point vertical coordinate and the depth map of the three-dimensional scene, and obtaining the successfully matched feature point vertical coordinate by solving a least square problem; and the reconstruction unit is used for reconstructing a three-dimensional scene according to the horizontal coordinate of the successfully matched feature point obtained by the first relation unit and the vertical coordinate of the successfully matched feature point obtained by the second relation unit.
Further, the system of equations relating the feature point abscissa and the depth map of the three-dimensional scene is as follows:
wherein (S_x, S_y) is the object plane of the scene, (x, y) is the nth imaging plane, depth(S_x, S_y) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and the remaining term is the coordinate of the feature point on the nth imaging plane of the focus stack.
Further, the system of equations of the feature point ordinate and the depth map of the three-dimensional scene is as follows:
wherein (S_x, S_y) is the object plane of the scene, (x, y) is the nth imaging plane, depth(S_x, S_y) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and the remaining term is the coordinate of the feature point on the nth imaging plane of the focus stack.
According to the scheme provided by the invention, by establishing the relationship between the three-dimensional scene feature points and the focus stack, the successfully matched feature points are tracked along their trajectories through the focus stack, so that three-dimensional scene reconstruction under the camera's field of view is realized, and accurate three-dimensional structure information can be provided for virtual reality and geometric measurement.
Drawings
Fig. 1 is a flowchart illustrating a method for reconstructing a three-dimensional scene from a focus stack based on feature point tracking according to a first embodiment of the present invention.
Fig. 2 is a schematic diagram of generating a focus stack according to a first embodiment of the present invention.
Fig. 3 is a schematic diagram of an nth imaging process of a focusing stack according to a first embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an apparatus for reconstructing a three-dimensional scene from a focus stack based on feature point tracking according to a second embodiment of the present invention.
Detailed Description
In the drawings, the same or similar reference numerals are used to denote the same or similar elements or elements having the same or similar functions. Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
In the description of the present invention, the terms "central", "longitudinal", "lateral", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present invention and simplifying the description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and therefore, should not be construed as limiting the scope of the present invention.
As shown in fig. 1, the method for reconstructing a three-dimensional scene from a focus stack based on feature point tracking according to the present embodiment includes:
step 101, establishing a forward model for generating a focusing stack by a three-dimensional scene to obtain the relationship between the three-dimensional scene characteristic point and the focusing stack.
In this step, the focus stack is a set of image sequences focused on different imaging planes or shot with different parameters; the focus stack in this embodiment is a set of image sequences focused on different imaging planes. The present embodiment explains the establishment of the forward model by taking as an example the acquisition of a focus stack by moving the lens along the optical axis while the detector is fixed. The detector captures a sequence of images of an object in the scene (the object being composed of a plurality of object points) focused on different imaging planes, and these image sequences form the focus stack. Because the focus stack images a plurality of imaging planes of a three-dimensional scene under a given field angle, three-dimensional scene reconstruction under the camera's field of view (FOV) can be realized, providing accurate three-dimensional structure information for virtual reality and geometric measurement. Of course, other methods of acquiring the focus stack are also contemplated by the present invention.
Considering a three-dimensional scene under the camera's field of view, one can adopt (f(S_x, S_y), depth(S_x, S_y)) to describe the three-dimensional scene, wherein (S_x, S_y) is the object plane of the scene, f(S_x, S_y) is the irradiance corresponding to (S_x, S_y), and depth(S_x, S_y) is the depth map of the three-dimensional scene.
Fig. 2 is a schematic diagram of forming the focus stack in the present embodiment. As shown in fig. 2, the left side represents the scene, the right side the detector, and the middle the lens. The detector is fixed and the lens is moved along the optical axis, as illustrated in fig. 2 by the 1st image through the nth image. As the lens moves along the optical axis, the object plane of the scene corresponding to the detector also moves along the optical axis accordingly, as illustrated by the 1st object plane through the nth object plane in fig. 2.
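Although the patent does not state it explicitly, under a thin-lens assumption the object plane brought into focus by each lens position follows from the Gaussian lens equation 1/d + 1/d' = 1/f. A minimal sketch (the focal length and distances are made-up values for illustration):

```python
def in_focus_object_distance(f_mm: float, lens_to_detector_mm: float) -> float:
    """Thin-lens equation 1/d + 1/d' = 1/f, solved for the object-side distance d."""
    d_prime = lens_to_detector_mm
    if d_prime <= f_mm:
        raise ValueError("lens-to-detector distance must exceed the focal length")
    return 1.0 / (1.0 / f_mm - 1.0 / d_prime)

# Moving a 50 mm lens from 51 mm to 55 mm from the fixed detector:
# the in-focus object plane sweeps from far (2550 mm) to near (550 mm).
planes = [in_focus_object_distance(50.0, d) for d in (51.0, 52.0, 55.0)]
```

This reproduces the figure's behaviour: as the lens moves away from the detector, successively nearer object planes come into focus, which is what generates distinct images of the stack.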
FIG. 3 is a schematic diagram of the nth imaging process of the focus stack. As shown in FIG. 3, (S_x, S_y) is the object plane of the scene, (x, y) is the nth imaging plane, and the dotted line between (S_x, S_y) and (x, y) is the plane of the lens at the nth imaging position. d_n is the distance from the nth imaging lens of the focus stack to the object plane (S_x, S_y), d'_n is the distance from the nth imaging lens of the focus stack to the detector, and d'_1 is the distance from the 1st imaging lens of the focus stack to the detector.
According to the lens imaging principle, and using a point spread function to express the lens imaging process, the forward model by which the three-dimensional scene generates the focus stack is established as follows:
wherein the kernel is the point spread function of (S_x, S_y) on the imaging plane (x, y); x and y are the abscissa and ordinate of the imaging plane, whose physical meaning is the position coordinate of the point (S_x, S_y) on the imaging plane.
In this embodiment, to simplify the model, the aperture of the lens diaphragm is approximated by a pinhole and the imaging process is regarded as pinhole imaging, which simplifies both the model and the computation; the point spread function is therefore an impulse (Dirac delta) function. Substituting it into the forward model above:
the relationship between the three-dimensional scene feature points and the focusing stack is obtained as follows:
further, three-dimensional scene feature points (S) are obtainedx,Sy) Trajectory equations in the focus stack.
Step 102: extracting all feature points of each image in the focus stack described in step 101, and tracking the coordinates of the successfully matched feature points according to the forward model by which the three-dimensional scene generates the focus stack.
In practical applications, SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated BRIEF), or SURF (Speeded-Up Robust Features) may be used to extract and match feature points. Matching means that the same object point is imaged at different positions in different images of the focus stack; matching finds these corresponding points.
In specific implementation, considering that the SIFT maintains strong robustness to the view angle change and noise of the image, and can process the matching problem under the conditions of translation, rotation and affine transformation among the images, the embodiment adopts the SIFT to execute the image matching of the focusing stack, and obtains the feature points of the successfully matched image.
In this step, all feature points in each image are extracted from the focus stack obtained in step 101: feature_1, feature_2, …, feature_M, and the coordinates of all successfully matched feature points are tracked through the focus stack, the coordinate of feature point feature_m on the nth imaging plane of the focus stack being (x_n^m, y_n^m).
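The extraction itself relies on a detector such as SIFT, but the matching stage can be illustrated on its own. The sketch below implements Lowe's ratio test over descriptor arrays in plain NumPy; the descriptors are stand-ins for real SIFT output, and `match_descriptors` is a hypothetical helper written for this illustration, not part of any library:

```python
import numpy as np

def match_descriptors(desc_a: np.ndarray, desc_b: np.ndarray, ratio: float = 0.75):
    """Lowe's ratio test: keep a match only when the nearest descriptor in
    desc_b is clearly closer than the second nearest.
    Returns a list of (index_in_a, index_in_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

Applying this pairwise between consecutive images of the stack, and chaining the resulting index pairs, yields the per-feature coordinate tracks that step 103 consumes.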
Step 103: and establishing an inversion model for reconstructing the three-dimensional scene by the focusing stack to obtain the three-dimensional coordinates of the successfully matched characteristic points, and reconstructing the three-dimensional scene.
Establishing an inversion model of reconstructing a three-dimensional scene by a focusing stack refers to establishing a three-dimensional coordinate of a feature point and an equation set of the three-dimensional scene which are successfully matched.
The method comprises the following steps:
For the successfully matched feature point feature_m, establish the system of equations relating the feature point abscissa S_x and the depth map depth(S_x, S_y) of the three-dimensional scene:
and establish the system of equations relating the feature point ordinate S_y and the depth map depth(S_x, S_y) of the three-dimensional scene:
The system of equations relating the feature point abscissa S_x and the depth map depth(S_x, S_y) of the three-dimensional scene can be expressed as AX = b_1 with X = (S_x, depth(S_x, S_y))^T; S_x and depth(S_x, S_y) are obtained by solving the least squares problem min ||AX - b_1||^2. Likewise, the system relating the feature point ordinate S_y and depth(S_x, S_y) can be expressed as AY = b_2 with Y = (S_y, depth(S_x, S_y))^T, and S_y is obtained by solving the least squares problem min ||AY - b_2||^2.
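Since the equation systems themselves appear only as images in the source, the sketch below assumes a simple pinhole geometry, x_n = S_x * d'_n / (depth - d'_n) with depth measured from the detector, purely for illustration; it then solves the resulting AX = b system by least squares as the text describes. Both the geometry and the helper name are assumptions, not the patent's exact formulas.

```python
import numpy as np

def reconstruct_point(x_coords, lens_to_detector):
    """Least-squares estimate of (S_x, depth) from the tracked x-coordinates
    of one feature point across the focus stack.

    Assumed pinhole relation (illustrative): x_n = S_x * d'_n / (depth - d'_n),
    rearranged to the linear form  -d'_n * S_x + x_n * depth = x_n * d'_n.
    """
    x = np.asarray(x_coords, dtype=float)
    d = np.asarray(lens_to_detector, dtype=float)
    A = np.column_stack([-d, x])          # unknowns X = (S_x, depth)^T
    b = x * d
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X[0], X[1]                     # S_x, depth(S_x, S_y)
```

Running the same solver on the y-coordinates of the track yields S_y, completing the three-dimensional coordinate (S_x, S_y, depth) of the feature point.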
In practical application, the reconstructed three-dimensional scene under the camera's field of view (FOV) may take the form of a three-dimensional point cloud or a three-dimensional rendering.
The method further comprises: measuring the distances between feature points, thereby providing geometric ranging of the three-dimensional scene.
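The geometric ranging step reduces to Euclidean distances between reconstructed points; a trivial sketch:

```python
import math

def point_distance(p, q):
    """Euclidean distance between two reconstructed 3-D feature points,
    each given as a tuple (S_x, S_y, depth)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
```

With feature points reconstructed in metric units, such distances directly give the three-dimensional geometric measurements the abstract promises.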
According to the method for reconstructing the three-dimensional scene by the focusing stack based on the feature point tracking, the feature points successfully matched in the focusing stack are tracked by constructing the relationship between the feature points of the three-dimensional scene and the focusing stack, and the three-dimensional coordinates of the feature points successfully matched are calculated, so that the reconstruction of the three-dimensional scene under the camera shooting field of view is realized, and accurate three-dimensional structure information can be provided for virtual reality and geometric measurement.
Referring to fig. 4, an embodiment of the present invention provides an apparatus for reconstructing a three-dimensional scene from a focus stack based on feature point tracking, the apparatus including:
the building module 201 is configured to build a forward model for generating a focus stack from a three-dimensional scene, and provide a relationship between a feature point of the three-dimensional scene and the focus stack;
an obtaining module 202, configured to extract feature points of each image from a focused stack, and track feature points successfully matched in the focused stack;
the reconstructing module 203 is configured to establish a three-dimensional coordinate of the successfully matched feature point and an equation set of the three-dimensional scene according to the successfully matched feature point acquired by the acquiring module 202, solve the equation set to obtain the three-dimensional coordinate of the successfully matched feature point, and reconstruct the three-dimensional scene.
In the above embodiment, the forward model established by the building module 201 is:
the relationship between the three-dimensional scene feature points and the focusing stack is as follows:
wherein (S_x, S_y) is the object plane of the scene, f(S_x, S_y) is the irradiance corresponding to (S_x, S_y), depth(S_x, S_y) is the depth map of the three-dimensional scene, d_n is the distance from the nth imaging lens of the focus stack to the object plane (S_x, S_y), d'_n is the distance from the nth imaging lens of the focus stack to the detector, and d'_1 is the distance from the 1st imaging lens of the focus stack to the detector.
The acquisition module 202 extracts feature points of each image from the focus stack using SIFT, ORB, or SURF methods.
The reconstruction module 203 specifically includes:
the first relation unit is used for establishing an equation set of the feature point abscissa successfully matched and the depth map of the three-dimensional scene, and obtaining the feature point abscissa successfully matched by solving a least square problem;
the second relation unit is used for establishing an equation set of the successfully matched feature point vertical coordinate and the depth map of the three-dimensional scene, and obtaining the successfully matched feature point vertical coordinate by solving a least square problem;
and the reconstruction unit is used for reconstructing a three-dimensional scene according to the horizontal coordinate of the successfully matched feature point obtained by the first relation unit and the vertical coordinate of the successfully matched feature point obtained by the second relation unit.
The equation set of the feature point abscissa and the depth map of the three-dimensional scene obtained by the first relation unit is as follows:
the system of equations of the feature point ordinate and the depth map of the three-dimensional scene obtained by the second relation unit is as follows:
in the embodiment of the present invention, the three-dimensional scene reconstructed by the reconstruction module 203 includes a three-dimensional point cloud image or a three-dimensional rendering image.
According to the device for reconstructing the three-dimensional scene by the focusing stack based on the feature point tracking, the relationship between the feature points of the three-dimensional scene and the focusing stack is established through the establishing module, the feature points successfully matched in the focusing stack are tracked, the three-dimensional coordinates of the feature points successfully matched are calculated, the reconstruction of the three-dimensional scene under the camera shooting field of view is further realized, and accurate three-dimensional structure information can be provided for virtual reality and geometric measurement.
Finally, it should be pointed out that: the above examples are only for illustrating the technical solutions of the present invention, and are not limited thereto. Those of ordinary skill in the art will understand that: modifications can be made to the technical solutions described in the foregoing embodiments, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. A method for reconstructing a three-dimensional scene from a focus stack based on feature point tracking, the method comprising:
establishing a forward modeling of generating a focusing stack by a three-dimensional scene to obtain the relation between the characteristic point of the three-dimensional scene and the focusing stack;
extracting the feature points of each image in the focus stack and, according to the forward model by which the three-dimensional scene generates the focus stack, tracking the coordinates of the successfully matched feature points through the focus stack;
establishing a three-dimensional coordinate of the successfully matched feature points and an equation set of the three-dimensional scene, solving the equation set to obtain the three-dimensional coordinate of the successfully matched feature points, and reconstructing the three-dimensional scene;
the established forward modeling is as follows:
the relationship between the three-dimensional scene feature points and the focusing stack is as follows:
wherein the left-hand side is the forward model by which the three-dimensional scene generates the focus stack, (S_x, S_y) is the object plane of the scene, f(S_x, S_y) is the irradiance corresponding to (S_x, S_y), (x, y) is the nth imaging plane, depth(S_x, S_y) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and the kernel is the point spread function of (S_x, S_y) on the imaging plane (x, y).
2. The method according to claim 1, wherein the establishing of the three-dimensional coordinates of the successfully matched feature points and the equation set of the three-dimensional scene, and the solving of the equation set to obtain the three-dimensional coordinates of the successfully matched feature points, specifically comprises:
establishing an equation set of the feature point abscissa successfully matched and the depth map of the three-dimensional scene, and solving a least square problem to obtain the feature point abscissa successfully matched;
and establishing an equation set of the successfully matched feature point ordinate and the depth map of the three-dimensional scene, and solving a least square problem to obtain the successfully matched feature point ordinate.
3. The method of claim 2, wherein the system of equations relating the feature point abscissa and the depth map of the three-dimensional scene is as follows:
wherein (S_x, S_y) is the object plane of the scene, (x, y) is the nth imaging plane, depth(S_x, S_y) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and the remaining term is the coordinate of the feature point on the nth imaging plane of the focus stack.
4. The method of claim 2, wherein the system of equations for the feature point ordinate and the depth map of the three-dimensional scene is:
wherein (S_x, S_y) is the object plane of the scene, (x, y) is the nth imaging plane, depth(S_x, S_y) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and the remaining term is the coordinate of the feature point on the nth imaging plane of the focus stack.
5. An apparatus for reconstructing a three-dimensional scene from a focus stack based on feature point tracking, the apparatus comprising:
the construction module is used for establishing a forward model for generating a focusing stack by a three-dimensional scene and giving a relation between a three-dimensional scene characteristic point and the focusing stack;
the acquisition module is used for extracting the feature points of each image from the focusing stack and tracking the feature points successfully matched in the focusing stack;
the reconstruction module is used for establishing a three-dimensional coordinate of the successfully matched feature point and an equation set of the three-dimensional scene according to the successfully matched feature point acquired by the acquisition module, solving the equation set to obtain the three-dimensional coordinate of the successfully matched feature point, and reconstructing the three-dimensional scene;
the forward modeling established by the construction module is as follows:
the relationship between the three-dimensional scene feature points and the focusing stack is as follows:
wherein the left-hand side is the forward model by which the three-dimensional scene generates the focus stack, (S_x, S_y) is the object plane of the scene, f(S_x, S_y) is the irradiance corresponding to (S_x, S_y), (x, y) is the nth imaging plane, depth(S_x, S_y) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and the kernel is the point spread function of (S_x, S_y) on the imaging plane (x, y).
6. The apparatus of claim 5, wherein the reconstruction module specifically comprises:
a first relation unit, configured to establish a system of equations relating the abscissae of the successfully matched feature points to the depth map of the three-dimensional scene, and to obtain the abscissae of the successfully matched feature points by solving a least squares problem;
a second relation unit, configured to establish a system of equations relating the ordinates of the successfully matched feature points to the depth map of the three-dimensional scene, and to obtain the ordinates of the successfully matched feature points by solving a least squares problem;
and a reconstruction unit, configured to reconstruct the three-dimensional scene from the feature point abscissae obtained by the first relation unit and the feature point ordinates obtained by the second relation unit.
7. The apparatus of claim 5, wherein the system of equations relating the feature point abscissa to the depth map of the three-dimensional scene is:
wherein (Sx, Sy) is the object plane of the scene, (x, y) is the nth imaging plane, depth(Sx, Sy) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and x_n is the abscissa of the feature point on the nth imaging plane of the focus stack.
8. The apparatus of claim 5, wherein the system of equations relating the feature point ordinate to the depth map of the three-dimensional scene is:
wherein (Sx, Sy) is the object plane of the scene, (x, y) is the nth imaging plane, depth(Sx, Sy) is the depth map of the three-dimensional scene, d'_n is the distance from the nth imaging lens of the focus stack to the detector, d'_1 is the distance from the 1st imaging lens of the focus stack to the detector, and y_n is the ordinate of the feature point on the nth imaging plane of the focus stack.
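Claims 6–8 recover the feature point coordinates by solving least squares problems over the tracked positions. As a hedged sketch (a simplified stand-in, not the patent's actual system of equations), the fragment below assumes a central-projection model in which a tracked abscissa drifts linearly with the lens-to-detector distance d'_n, so the single unknown slope Sx/depth is recovered with `np.linalg.lstsq`; all numeric values are synthetic:

```python
import numpy as np

# Lens-to-detector distances d'_n of the focus stack (values assumed for
# illustration) and the tracked abscissae x_n of one matched feature point.
d_n = np.array([50.0, 52.0, 54.0, 56.0, 58.0])        # d'_n, e.g. in mm
true_slope = 0.04                                      # Sx / depth (synthetic)
rng = np.random.default_rng(1)
x_n = true_slope * d_n + rng.normal(0.0, 1e-3, d_n.size)  # noisy tracked positions

# Overdetermined linear system  d'_n * (Sx/depth) = x_n, solved in the
# least squares sense, in the spirit of the first relation unit.
A = d_n[:, None]
slope, residuals, rank, sv = np.linalg.lstsq(A, x_n, rcond=None)
estimated = slope[0]   # estimate of Sx / depth
```

The same shape of problem, with the ordinates y_n in place of x_n, corresponds to the second relation unit; stacking many tracked feature points simply adds rows to the overdetermined system.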
CN201710091014.2A 2016-12-14 2017-02-20 Method and apparatus for reconstructing a three-dimensional scene from a focus stack based on feature point tracking Active CN106846469B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611154752 2016-12-14
CN2016111547529 2016-12-14

Publications (2)

Publication Number Publication Date
CN106846469A CN106846469A (en) 2017-06-13
CN106846469B true CN106846469B (en) 2019-12-03

Family

ID=59128734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710091014.2A Active CN106846469B (en) 2016-12-14 2017-02-20 Method and apparatus for reconstructing a three-dimensional scene from a focus stack based on feature point tracking

Country Status (1)

Country Link
CN (1) CN106846469B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034490B (en) * 2021-04-16 2023-10-10 北京石油化工学院 Stacking safety distance monitoring method for chemical warehouse

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101432776A (en) * 2006-05-05 2009-05-13 Thomson Licensing System and method for three-dimensional object reconstruction from two-dimensional images
KR100983548B1 (en) * 2010-04-12 2010-09-24 Ewha Womans University Industry Collaboration Foundation A 3d shape reconstruction method considering point spread function of a microscope
CN103440679A (en) * 2013-08-09 2013-12-11 中国科学院苏州生物医学工程技术研究所 Three-dimensional reconstruction method for confocal laser scanning microscopic image
CN104463949A (en) * 2014-10-24 2015-03-25 郑州大学 Rapid three-dimensional reconstruction method and system based on light field digit refocusing
CN105023275A (en) * 2015-07-14 2015-11-04 清华大学 Super-resolution light field acquisition device and three-dimensional reconstruction method thereof
CN105279789A (en) * 2015-11-18 2016-01-27 中国兵器工业计算机应用技术研究所 A three-dimensional reconstruction method based on image sequences


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Plant Three-dimensional Information Detection Based on Single Moving Camera; Dongbo Zhang et al.; International Conference on Computer, Communications and Information Technology; Dec. 31, 2014; full text *
Depth information acquisition based on 4D light field data; Chen Dianwen; China Masters' Theses Full-text Database, Information Science and Technology; Aug. 15, 2016 (No. 8); full text *
Research on super-resolution focal stack based on light field camera; Zhao Xingrong et al.; Computer Measurement & Control; Dec. 31, 2014; Vol. 22, No. 3; full text *

Also Published As

Publication number Publication date
CN106846469A (en) 2017-06-13

Similar Documents

Publication Publication Date Title
CN109003311B (en) Calibration method of fisheye lens
CN101630406B (en) Camera calibration method and camera calibration device
CN109035394B (en) Face three-dimensional model reconstruction method, device, equipment and system and mobile terminal
JP2019532451A (en) Apparatus and method for obtaining distance information from viewpoint
KR20180054487A (en) Method and device for processing dvs events
CN105005964B (en) Geographic scenes panorama sketch rapid generation based on video sequence image
JP6174104B2 (en) Method, apparatus and system for generating indoor 2D plan view
CN110009672A (en) Improved ToF depth image processing method, 3D image imaging method and electronic device
CN202075794U (en) Unmanned plane aerial photography three-dimensional imaging processing device
CN105184784B (en) Method for obtaining depth information with a monocular camera based on motion information
CN206563985U (en) 3-D imaging system
CN111080774B (en) Method and system for reconstructing light field by applying depth sampling
WO2018032841A1 (en) Method, device and system for drawing three-dimensional image
JP2024537798A (en) Photographing and measuring method, device, equipment and storage medium
CN107454377A (en) Algorithm and system for three-dimensional imaging using a camera
CN105809664B (en) Method and device for generating three-dimensional image
CN103500471A (en) Method for realizing high-resolution augmented reality system
JP7300895B2 (en) Image processing device, image processing method, program, and storage medium
CN106846469B (en) Method and apparatus for reconstructing a three-dimensional scene from a focus stack based on feature point tracking
CN110012236A (en) Information processing method, device, equipment and computer storage medium
Wang et al. A study on camera array and its applications
CN107240149A (en) Object three-dimensional model building method based on image processing
KR101673144B1 (en) Stereoscopic image registration method based on a partial linear method
CN104463958A (en) Three-dimensional super-resolution method based on disparity map fusing
CN105578173A (en) Rapid three-dimensional space projection and camera shooting visual identification system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant