CN110751601A - Distortion correction method based on RC optical system - Google Patents
- Publication number: CN110751601A
- Application number: CN201910850304.XA
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T 5/80: Geometric correction (image enhancement or restoration)
- G06T 3/4023: Scaling of whole images or parts thereof, e.g. expanding or contracting, based on decimating or inserting pixels or lines of pixels
- G06T 7/66: Analysis of geometric attributes of image moments or centre of gravity
- G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
Abstract
The invention discloses a distortion correction method based on an RC (Ritchey-Chrétien) optical system. A high-precision six-degree-of-freedom platform and an imaging-centroid algorithm are used to accurately locate n² target points uniformly distributed over the detector focal plane. The camera field of view is measured accurately, the instantaneous fields of view in the sagittal and meridional directions are calculated, and the rotation angle corresponding to each target point is computed. The high-precision six-degree-of-freedom platform is then rotated, images are collected, the imaging centroid of each star-point target image is calculated, and the camera interior parameters are derived. Combining the computed interior and exterior camera parameters yields the accurate focal plane geometric position deviation of each target point. A correction model is then built from the target points and generalized to every pixel, so that the geometric position of each pixel is corrected quickly, greatly reducing the amount of calculation while maintaining high correction accuracy.
Description
Technical Field
The invention relates to a distortion correction method based on an RC optical system.
Background
In the traditional method, distortion correction is realized by shooting distorted images of targets such as dot-matrix cards and checkerboard test cards, calibrating those images, and establishing a geometric position deviation model. Common targets use patterns such as dot matrices and row/column gratings, but they cannot be applied to distortion correction of long-focal-length cameras.
The RC (Ritchey-Chrétien) optical system has excellent imaging quality and a simple structure, is widely used in large optical systems and space optical systems, and can achieve a long focal length within a limited envelope; its hyperbolic aspheric mirrors inherently correct spherical aberration and coma. Although the RC optical system has a small field of view, its high resolution means that geometric distortion still seriously degrades imaging quality, so the requirement on distortion correction accuracy is very strict.
Disclosure of Invention
The invention aims to provide a distortion correction method based on an RC optical system which can quickly correct the geometric position of each pixel, greatly reduce the amount of calculation, and maintain high correction accuracy.
The technical scheme for realizing the purpose of the invention is as follows: a distortion correction method based on an RC optical system comprises the following steps:
S1, constructing an image acquisition system comprising a uniform-light-source integrating sphere, a star-point target, a collimator, a high-precision six-degree-of-freedom displacement platform and a camera;

S2, collecting images and calculating the camera interior parameters; the camera interior parameters comprise the principal point coordinates, the principal distances, the theoretical imaging distance of each target point from the focal plane center point, the actual imaging distance of each target point from the focal plane center point, and the relative distortion; the actual imaging distance of each target point from the focal plane center point is the distance between the imaging centroid of that target point and its reference imaging centroid;

S3, interpolating the target points obtained in the second step to obtain a distortion correction model, and generalizing the correction model to each pixel to correct its geometric position.
The method for constructing the image acquisition system in the step S1 includes: firstly, a camera is arranged on a high-precision six-degree-of-freedom displacement platform; and then, arranging the star point target at the focal plane of the collimator, providing a light source by using a uniform light source integrating sphere to enable the target at the focal plane of the collimator to be simulated into an object image of a point at infinity, and projecting the star point target image onto the focal plane of the camera through an optical system.
The star-point target is a thin copper sheet with a hole; it is selected from the designed theoretical focal length, the collimator focal length and the pixel pitch, with the hole size chosen according to the computed size of one pixel's projection on the object plane.
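For illustration, the size of one camera pixel projected onto the object plane (the collimator focal plane) scales with the ratio of the collimator and camera focal lengths. The sketch below assumes this standard relation; the function name and all numeric values are hypothetical:

```python
def pixel_projection_on_object_plane(pixel_pitch_mm, f_camera_mm, f_collimator_mm):
    """Length on the collimator focal plane subtended by one camera pixel:
    the pixel's angular size (pitch / camera focal length) times the
    collimator focal length."""
    return pixel_pitch_mm * f_collimator_mm / f_camera_mm

# hypothetical values: 10 um pitch, 1.2 m camera focal length, 2 m collimator
hole_scale_mm = pixel_projection_on_object_plane(0.010, 1200.0, 2000.0)
```

The star-point hole would then be sized relative to `hole_scale_mm` so that its image covers a known number of pixels.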
The step S2 specifically comprises the following steps:

S2.1, rotating the high-precision six-degree-of-freedom displacement platform so that the star-point target image scans the whole field of view, and recording the platform's initial sagittal angle α1 and cut-off angle α2 and initial meridional angle α3 and cut-off angle α4, to obtain the field angle FOV:

FOV_sagittal = |α1 - α2|
FOV_meridional = |α3 - α4|

S2.2, calculating the instantaneous field angle IFOV from the field angle FOV and the pixel counts of the camera detector:

IFOV_sagittal = FOV_sagittal / pixels1
IFOV_meridional = FOV_meridional / pixels2

wherein pixels1 and pixels2 are the numbers of camera detector pixels in the sagittal and meridional directions, respectively;
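The FOV and IFOV computations of steps S2.1 and S2.2 can be sketched as follows; all angle and pixel-count values are assumed:

```python
def field_angle(start_angle, stop_angle):
    """FOV = |start - stop| from the platform rotation angles (degrees)."""
    return abs(start_angle - stop_angle)

def instantaneous_fov(fov_deg, n_pixels):
    """IFOV = FOV / pixel count along that axis (degrees per pixel)."""
    return fov_deg / n_pixels

fov_sag = field_angle(-0.5, 0.5)             # alpha1, alpha2 (assumed)
fov_mer = field_angle(-0.4, 0.4)             # alpha3, alpha4 (assumed)
ifov_sag = instantaneous_fov(fov_sag, 4096)  # assumed detector width in pixels
```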
S2.3, selecting n² target points uniformly distributed over the whole camera detector, denoted (x_i, y_j), in units of pixels, with i and j each running from 1 to n independently;
s2.4, collecting images of the star point targets at all angles, and calculating the actual imaging mass center of each target point by using a central algorithm;
and S2.5, calculating an interior parameter of the optical system by using the calculated imaging mass center and the theoretical rotation angle.
The centroid calculation of step S2.4 specifically comprises: recording, for the brightest pixel of the star-point target's imaged spot and its 8 surrounding points, the abscissa pixel number X_k, the ordinate pixel number Y_k and the gray value DN_k, and solving the imaging centroid of each target point as:

x̄ = Σ_{k=1}^{9} X_k·DN_k / Σ_{k=1}^{9} DN_k
ȳ = Σ_{k=1}^{9} Y_k·DN_k / Σ_{k=1}^{9} DN_k

wherein x̄ and ȳ are the coordinates of the actual imaging centroid of the target point (x_i, y_j), in units of pixels, with i and j each running from 1 to n independently;
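The centroid computation described above can be sketched as follows; the 3×3 window layout around the peak pixel and the helper name are assumptions of this sketch:

```python
import numpy as np

def spot_centroid(img, peak_row, peak_col):
    """Gray-weighted centroid over the peak pixel and its 8 neighbours,
    returned as (row, col) in pixel units."""
    win = img[peak_row - 1:peak_row + 2, peak_col - 1:peak_col + 2].astype(float)
    rows, cols = np.mgrid[peak_row - 1:peak_row + 2, peak_col - 1:peak_col + 2]
    total = win.sum()
    return (rows * win).sum() / total, (cols * win).sum() / total

# a symmetric spot centres exactly on its peak pixel
img = np.zeros((5, 5))
img[1:4, 1:4] = [[1, 2, 1], [2, 8, 2], [1, 2, 1]]
r, c = spot_centroid(img, 2, 2)  # (2.0, 2.0)
```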
the principal point coordinates and the principal distances in the step S2.5 follow from the imaging model

x_i = f_x·tan(w) + x_0
y_i = f_y·tan(-v) + y_0

fitted over all n² target points, wherein n² is the number of target points, x_i and y_i are respectively the horizontal and vertical coordinates of each target point's imaging centroid relative to the image plane center point, w and v are the rotation angles corresponding to x_i and y_i, the sign -v reflects the conversion between the high-precision six-degree-of-freedom displacement platform coordinate system and the camera focal plane coordinate system carried out in the calculation, f_x and f_y are the principal distances in the sagittal and meridional directions respectively, f is the principal distance of the whole camera, and x_0 and y_0 are the camera principal point coordinates.
The calculation formula of the theoretical imaging distance from each target point to the focal plane center point in the step S2.5 is as follows:

h_x = f·tan(w) + x_0
h_y = f·tan(-v) + y_0
h = sqrt(h_x² + h_y²)

wherein h_x is the transverse theoretical imaging distance of the target point from the focal plane center point, h_y is the longitudinal theoretical imaging distance of the target point from the focal plane center point, h is the theoretical imaging distance of the target point from the focal plane center point, w is the transverse rotation angle, v is the longitudinal rotation angle, and x_0 and y_0 are the camera principal point coordinates.
The actual imaging distance of each target point from the focal plane center point and the relative distortion Δh in the step S2.5 are calculated as:

R_real = sqrt(x_i² + y_i²)
Δh = (R_real - h) / h

wherein R_real is the actual imaging distance of each target point from the focal plane center point, Δh is the relative distortion, x_i and y_i are respectively the horizontal and vertical coordinates of each target point's imaging centroid relative to the image plane center point, w_0 is the principal point transverse rotation angle, and v_0 is the principal point longitudinal rotation angle.
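The distance and distortion quantities above can be sketched numerically; the (R_real - h)/h form of the relative distortion and all numeric values are assumptions of this sketch:

```python
import math

def theoretical_distance(f, w_deg, v_deg, x0, y0):
    """h_x = f*tan(w) + x0, h_y = f*tan(-v) + y0, h = sqrt(h_x^2 + h_y^2)."""
    hx = f * math.tan(math.radians(w_deg)) + x0
    hy = f * math.tan(math.radians(-v_deg)) + y0
    return math.hypot(hx, hy)

def relative_distortion(r_real, h):
    """Assumed definition: (R_real - h) / h."""
    return (r_real - h) / h

h = theoretical_distance(f=1000.0, w_deg=0.1, v_deg=0.0, x0=0.0, y0=0.0)
dh = relative_distortion(h * 1.002, h)  # ~0.002 for a 0.2% radial error
```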
The step S3 specifically comprises: correcting the image distortion by a two-dimensional Lagrange interpolation method. For a non-grid point (x, y), the function value can be approximated by the Lagrange interpolation function:

Z(x, y) = Σ_{i=1}^{n} Σ_{j=1}^{n} Z_ij · Π_{k≠i} (x - x_k)/(x_i - x_k) · Π_{l≠j} (y - y_l)/(y_j - y_l)

Transforming the above formula gives:

[U, V] = Σ_{i=1}^{n} Σ_{j=1}^{n} [X_ij, Y_ij] · Π_{k≠j} (x - x_ik)/(x_ij - x_ik) · Π_{l≠i} (y - y_lj)/(y_ij - y_lj)

wherein x_ij is the actual x coordinate on the image plane of the sampling point in row i, column j; y_ij is the actual y coordinate of that sampling point; X_ij and Y_ij are the x and y coordinates of the theoretical position of the sampling point in row i, column j; (x, y) are the actual coordinates of an image point before distortion correction; and [U, V] are the corrected theoretical position coordinates of the point (x, y) calculated by interpolation;
using the above formula, the theoretical position of a pixel is calculated from its actual position by interpolation, realizing distortion correction near the target points of the image plane; this yields the target point distortion correction model, which is then generalized to correct the geometric positions of all pixels of the detector, completing the preliminary distortion correction;

after the preliminary correction is completed, pixel points with a gray value of 0 appear in the image, and bilinear interpolation is needed for further correction.
The formula for the further correction by bilinear interpolation is:

R_1 = ((x_2 - x)·Q_11 + (x - x_1)·Q_21) / (x_2 - x_1)
R_2 = ((x_2 - x)·Q_12 + (x - x_1)·Q_22) / (x_2 - x_1)
P = ((y_2 - y)·R_1 + (y - y_1)·R_2) / (y_2 - y_1)

wherein Q_11, Q_12, Q_21 and Q_22 are 4 pixels with known gray values at (x_1, y_1), (x_1, y_2), (x_2, y_1) and (x_2, y_2);

the further correction by bilinear interpolation specifically comprises:

the first step: linear interpolation in the x direction; from Q_12 and Q_22 the gray value of the interpolated point R_2 is calculated, and similarly from Q_11 and Q_21 the interpolated point R_1, obtaining R_1 and R_2;

the second step: linear interpolation in the y direction; the R_1 and R_2 calculated in the first step are interpolated in the y direction to obtain the gray value of the point P;

the result of bilinear interpolation is independent of the interpolation order: interpolating first in the y direction and then in the x direction gives the same result as the steps above.
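The two-step bilinear interpolation described above can be sketched as follows; the unit-cell corner layout is the standard convention and the numeric values are illustrative:

```python
def bilinear(q11, q21, q12, q22, x1, x2, y1, y2, x, y):
    """Q11=(x1,y1), Q21=(x2,y1), Q12=(x1,y2), Q22=(x2,y2).
    Step 1: interpolate along x to get R1 (at y1) and R2 (at y2).
    Step 2: interpolate R1, R2 along y to get P."""
    r1 = ((x2 - x) * q11 + (x - x1) * q21) / (x2 - x1)
    r2 = ((x2 - x) * q12 + (x - x1) * q22) / (x2 - x1)
    return ((y2 - y) * r1 + (y - y1) * r2) / (y2 - y1)

# at the cell centre the result is the mean of the four corner gray values
p = bilinear(10, 20, 30, 40, 0.0, 1.0, 0.0, 1.0, 0.5, 0.5)  # 25.0
```

Swapping the interpolation order (y first, then x) gives the same value, matching the order-independence noted in the text.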
By adopting the above technical scheme, the invention has the following beneficial effects: (1) the invention greatly reduces full-field distortion and improves correction accuracy: the central-field distortion is corrected from 0.2% before correction to 0.01%, and the edge-field distortion from 0.5% before correction to 0.05%.
(2) Compared with traditional solutions, the image acquisition system built by the invention is simple to operate, lower in cost and higher in measurement precision; it provides reliable data and a faster measurement scheme for distortion correction of high-precision optical systems, realizes an optimized distortion correction device and method for small-field high-resolution optical systems, and has wide application prospects in commercial remote sensing.
(3) The image acquisition part of the invention takes multiple factors into account, including the camera interior parameters (camera principal point coordinates, principal distance, actual focal length of the optical system, and detector size) and the exterior parameters (aperture of the star-point target, focal length of the collimator, etc.), and combines Lagrange interpolation with the geometric positions of the target points to construct the correction model, improving distortion correction accuracy while simplifying the calculation.
Drawings
In order that the present disclosure may be more readily and clearly understood, reference is now made to the following detailed description of the present disclosure taken in conjunction with the accompanying drawings, in which
FIG. 1 is a flow chart of a method of the present invention.
Fig. 2 is a schematic diagram of an image coordinate system.
Fig. 3 is a schematic view of a focal plane coordinate system.
Fig. 4 is a schematic diagram of an ideal image plane coordinate system.
FIG. 5 is a diagram of bilinear interpolation.
Fig. 6 is a partial image before correction.
FIG. 7 is a partial image after correction.
Detailed Description
(example 1)
The distortion correction method based on the RC optical system of the embodiment shown in fig. 1 includes the following steps:
S1, constructing the image acquisition system, which comprises a uniform-light-source integrating sphere, a star-point target, a collimator, a high-precision six-degree-of-freedom displacement platform and an area-array CCD camera.

The image acquisition system is constructed as follows: first, the area-array CCD camera is mounted on the high-precision six-degree-of-freedom displacement platform; then the star-point target is placed at the focal plane of the collimator, and the uniform-light-source integrating sphere provides illumination so that the target at the collimator focal plane simulates an object at infinity, whose star-point image is projected through the optical system onto the focal plane of the camera.

The star-point target is a thin copper sheet with holes; it is selected from the designed theoretical focal length, the collimator focal length and the pixel pitch, with the hole size chosen according to the computed size of one pixel's projection on the object plane.
S2, collecting images and calculating the camera interior parameters; the interior parameters of the area-array CCD camera comprise the principal point coordinates, the principal distances, the theoretical imaging distance of each target point from the focal plane center point, the actual imaging distance of each target point from the focal plane center point, and the relative distortion; the actual imaging distance of each target point from the focal plane center point is the distance between the imaging centroid of that target point and its reference imaging centroid. The method specifically comprises the following steps:

S2.1, rotating the high-precision six-degree-of-freedom displacement platform so that the star-point target image scans the whole field of view, and recording the platform's initial sagittal angle α1 and cut-off angle α2 and initial meridional angle α3 and cut-off angle α4, to obtain the field angle FOV:

FOV_sagittal = |α1 - α2|
FOV_meridional = |α3 - α4|

S2.2, calculating the instantaneous field angle IFOV from the field angle FOV and the pixel counts of the camera detector:

IFOV_sagittal = FOV_sagittal / pixels1
IFOV_meridional = FOV_meridional / pixels2

wherein pixels1 and pixels2 are the numbers of camera detector pixels in the sagittal and meridional directions, respectively;
S2.3, selecting n² target points uniformly distributed over the whole camera detector (the correction accuracy improves as n increases; n ≥ 3), denoted (x_i, y_j), in units of pixels, with i and j each running from 1 to n independently; the platform rotation angle is then calculated from the IFOV: the meridional rotation angle is the product of the meridional IFOV and the difference (in pixels) between each target point's meridional coordinate and the central pixel, and likewise for the sagittal direction.
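The rotation-angle rule of step S2.3 can be sketched as follows; the centre-pixel convention and all numeric values are assumptions of this sketch:

```python
def target_rotation_angles(px_sag, px_mer, ifov_sag, ifov_mer,
                           pixels_sag, pixels_mer):
    """Platform rotation placing a target at pixel (px_sag, px_mer):
    angle = IFOV * (target pixel - centre pixel), per axis."""
    w = ifov_sag * (px_sag - pixels_sag / 2)
    v = ifov_mer * (px_mer - pixels_mer / 2)
    return w, v

# the centre pixel needs no rotation
w, v = target_rotation_angles(2048, 2048, 2.4e-4, 2.4e-4, 4096, 4096)  # (0.0, 0.0)
```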
S2.4, collecting images of the star-point target at every angle and calculating the actual imaging centroid of each target point with the centroid algorithm, specifically: recording, for the brightest pixel of the star-point target's imaged spot and its 8 surrounding points, the abscissa pixel number X_k, the ordinate pixel number Y_k and the gray value DN_k, and solving the imaging centroid of each target point as:

x̄ = Σ_{k=1}^{9} X_k·DN_k / Σ_{k=1}^{9} DN_k
ȳ = Σ_{k=1}^{9} Y_k·DN_k / Σ_{k=1}^{9} DN_k

wherein x̄ and ȳ are the coordinates of the actual imaging centroid of the target point (x_i, y_j), in units of pixels, with i and j each running from 1 to n independently.
S2.5, calculating the interior parameters of the optical system from the computed imaging centroids and the theoretical rotation angles, wherein:

the principal point coordinates and the principal distances follow from the imaging model

x_i = f_x·tan(w) + x_0
y_i = f_y·tan(-v) + y_0

fitted over all n² target points, wherein n² is the number of target points, x_i and y_i are respectively the horizontal and vertical coordinates of each target point's imaging centroid relative to the image plane center point, w and v are the rotation angles corresponding to x_i and y_i, the sign -v reflects the conversion between the high-precision six-degree-of-freedom displacement platform coordinate system and the camera focal plane coordinate system carried out in the calculation, f_x and f_y are the principal distances in the sagittal and meridional directions respectively, f is the principal distance of the whole camera, and x_0 and y_0 are the camera principal point coordinates.
The calculation formula of the theoretical imaging distance between each target point and the focal plane center point is as follows:

h_x = f·tan(w) + x_0
h_y = f·tan(-v) + y_0
h = sqrt(h_x² + h_y²)

wherein h_x is the transverse theoretical imaging distance of the target point from the focal plane center point, h_y is the longitudinal theoretical imaging distance of the target point from the focal plane center point, h is the theoretical imaging distance of the target point from the focal plane center point, w is the transverse rotation angle, v is the longitudinal rotation angle, and x_0 and y_0 are the camera principal point coordinates.
The actual imaging distance of each target point from the focal plane center point and the relative distortion Δh are calculated as:

R_real = sqrt(x_i² + y_i²)
Δh = (R_real - h) / h

wherein R_real is the actual imaging distance of each target point from the focal plane center point, Δh is the relative distortion, x_i and y_i are respectively the horizontal and vertical coordinates of each target point's imaging centroid relative to the image plane center point, w_0 is the principal point transverse rotation angle, and v_0 is the principal point longitudinal rotation angle.
S3, interpolating the target points obtained in the second step to obtain the distortion correction model, and generalizing the correction model to each pixel to correct its geometric position, specifically:
As shown in FIG. 2, the origin O_1 of the image coordinate system O_1-ij is at the center of the pixel in the upper left corner of the image (the first pixel in the upper left corner); the j coordinate axis points downward along the image column direction and the i coordinate axis points rightward along the image row direction. The unit of the image coordinate system is the pixel, and the coordinate origin is (0, 0).
As shown in FIG. 3, the focal plane coordinate system O_2-xy is a planar coordinate system established on the basis of the image coordinate system; its function is to convert the pixel coordinates of the image coordinate system into physical coordinates. The axes O_2-x and O_2-y are parallel to the image coordinate axes O_1-i and O_1-j respectively; the origin O_2 is the center point of the four pixels at the center of the camera detector, corresponding to the geometric position at image coordinate pixel (pixels1/2, pixels2/2). The unit of the focal plane coordinate system is mm.
The conversion from the image coordinate system to the focal plane coordinate system is as follows (d being the pixel size in mm):

x = (i - pixels1/2)·d
y = (j - pixels2/2)·d
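A minimal sketch of the pixel-to-millimetre conversion described above; the sign convention (focal plane axes in the same direction as the image axes) is an assumption of the sketch:

```python
def image_to_focal_plane(i, j, d_mm, pixels1, pixels2):
    """(i, j) in pixels -> (x, y) in mm, origin at the detector centre,
    axes parallel to the image axes (sign convention assumed)."""
    return (i - pixels1 / 2) * d_mm, (j - pixels2 / 2) * d_mm

x, y = image_to_focal_plane(2048, 2048, 0.01, 4096, 4096)  # (0.0, 0.0)
```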
the ideal image plane coordinate system is a virtual coordinate system mainly used as a reference for distortion correction as shown in fig. 4, and the ideal image plane coordinate system O3The origin of-u 'v' is the principal point O3The image plane coordinate system is perpendicular to the principal distance. Ideal image plane coordinate system O3-u’,O3V' respectively with the focal plane coordinate system O2-x、O2-y is parallel and in the same direction. O is3Is the ideal image plane center. O is3At O2-coordinates in xy of (x)0,y0) With u' ═ x-x0, v’=y-y0。
In order to overcome the traditional method's dependence on a radial distortion model, the image is distortion-corrected by two-dimensional Lagrange interpolation. A series of target points is taken on the two-dimensional image plane, the correspondence before and after correction is obtained experimentally, and the target points are then used to perform two-dimensional Lagrange interpolation on the pixels to be corrected, yielding each pixel's corrected position. Since interpolation is performed in both directions, the image plane distortion is not required to be symmetric about the optical axis center.
From the theory of numerical calculation methods, on a two-dimensional plane, if the function value at the grid nodes (x_i, y_j), with coordinates x_i (i = 1, 2, ..., n) and y_j (j = 1, 2, ..., n), is Z_ij, i.e. the target point (x_i, y_j) and its gray value, then for a non-grid point (x, y) the function value can be approximated by the Lagrange interpolation function:

Z(x, y) = Σ_{i=1}^{n} Σ_{j=1}^{n} Z_ij · Π_{k≠i} (x - x_k)/(x_i - x_k) · Π_{l≠j} (y - y_l)/(y_j - y_l)

Applying the above formula to distortion correction, a series of target points taken on the two-dimensional image plane forms a grid; with the actual imaging centroids (x_i, y_j) of the target points as the independent variables and the theoretical positions [X, Y] of the target points as the function values, the theoretical position [U, V] of any point on the image plane can be obtained from its actual position (x, y).
The interpolation function requires the target points' values in the x and y directions to be independent, i.e. the target points (x_i, y_j) {i = 1, 2, ..., n; j = 1, 2, ..., n} form a regular grid of n rows and n columns. In actual sampling, however, the actual positions of the target points on the image plane can only be guaranteed to form an approximately regular grid of n rows and n columns rather than an ideal one; that is, the x and y coordinates of the target points are correlated and appear in pairs, each target point corresponding to one (x, y) coordinate. Therefore, to use the Lagrange interpolation function, the above formula must be transformed according to the computed camera interior parameters as follows:

[U, V] = Σ_{i=1}^{n} Σ_{j=1}^{n} [X_ij, Y_ij] · Π_{k≠j} (x - x_ik)/(x_ij - x_ik) · Π_{l≠i} (y - y_lj)/(y_ij - y_lj)

in which x_ij is the actual x coordinate on the image plane of the sampling point in row i, column j; y_ij is the actual y coordinate of that sampling point; X_ij and Y_ij are the x and y coordinates of the theoretical position of the sampling point in row i, column j (the computed rotation angle of the high-precision six-degree-of-freedom displacement platform multiplied by the principal distance, i.e. the pixel pitch d multiplied by the pixel-count difference); (x, y) are the actual coordinates of an image point before distortion correction; and [U, V] are the corrected theoretical position coordinates of the point (x, y) calculated by interpolation;
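The interpolation above can be sketched with a tensor-product Lagrange interpolation. The sketch below simplifies the row- and column-dependent nodes to 1-D node vectors (as for an exactly regular grid), which is an assumption; function names are illustrative:

```python
import numpy as np

def lagrange_basis(nodes, t):
    """L_k(t) = prod_{m != k} (t - t_m) / (t_k - t_m)."""
    nodes = np.asarray(nodes, dtype=float)
    out = np.ones(len(nodes))
    for k in range(len(nodes)):
        for m in range(len(nodes)):
            if m != k:
                out[k] *= (t - nodes[m]) / (nodes[k] - nodes[m])
    return out

def corrected_position(x_nodes, y_nodes, X_theory, Y_theory, x, y):
    """Map an actual image point (x, y) to its corrected position [U, V]
    using an n x n target grid: row i <-> y_nodes[i], column j <-> x_nodes[j]."""
    lx = lagrange_basis(x_nodes, x)
    ly = lagrange_basis(y_nodes, y)
    return float(ly @ X_theory @ lx), float(ly @ Y_theory @ lx)

# a pure shift between actual and theoretical grids is reproduced exactly
xs = ys = np.array([0.0, 1.0, 2.0])
X = np.tile(xs, (3, 1)) + 0.1    # theoretical x = actual x + 0.1
Y = np.tile(ys, (3, 1)).T - 0.2  # theoretical y = actual y - 0.2
U, V = corrected_position(xs, ys, X, Y, 0.5, 1.3)  # approximately (0.6, 1.1)
```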
By utilizing the above formula, the theoretical position of a pixel can be calculated from its actual position by interpolation, realizing distortion correction near the target points of the image plane; the resulting target point distortion correction model is then generalized to correct the geometric positions of all pixels of the detector, completing the preliminary distortion correction.
After the preliminary correction is completed, pixel points with the gray value of 0 appear on the image, and bilinear interpolation is needed for further correction. The principle of bilinear interpolation is shown in FIG. 5.
The formula for the further correction by bilinear interpolation is:

R_1 = ((x_2 - x)·Q_11 + (x - x_1)·Q_21) / (x_2 - x_1)
R_2 = ((x_2 - x)·Q_12 + (x - x_1)·Q_22) / (x_2 - x_1)
P = ((y_2 - y)·R_1 + (y - y_1)·R_2) / (y_2 - y_1)

wherein the red points Q_11, Q_12, Q_21 and Q_22 in FIG. 5 are 4 pixels with known gray values at (x_1, y_1), (x_1, y_2), (x_2, y_1) and (x_2, y_2);

the further correction by bilinear interpolation specifically comprises:

the first step: linear interpolation in the x direction; from Q_12 and Q_22 the gray value of the interpolated blue point R_2 is calculated, and similarly from Q_11 and Q_21 the blue point R_1 is obtained, giving R_1 and R_2;

the second step: linear interpolation in the y direction; the R_1 and R_2 calculated in the first step are interpolated in the y direction to obtain the gray value of the point P;

the result of bilinear interpolation is independent of the interpolation order: interpolating first in the y direction and then in the x direction gives the same result as the steps above.
The effects before and after correction are shown in FIG. 6 and FIG. 7. The gray value of the edge pixels is 0 because the geometric positions of the pixels change during correction; since these pixels lie at the edge of the detector, bilinear interpolation cannot be used to correct them.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (9)
1. A distortion correction method based on an RC optical system is characterized in that: the method comprises the following steps:
1) an image acquisition system is set up, and the system comprises a uniform light source integrating sphere, a star point target, a collimator, a high-precision six-degree-of-freedom displacement platform and a camera; firstly, a camera is arranged on a high-precision six-degree-of-freedom displacement platform; then, the star point target is arranged at the focal plane of the collimator, a uniform light source integrating sphere is used for providing a light source to enable the target at the focal plane of the collimator to be simulated into an object image of a point at infinity, and the star point target image is projected onto the focal plane of the camera through an optical system;
2) acquiring images and calculating the camera interior parameters; the camera interior parameters comprise the principal point coordinates, the principal distances, the theoretical imaging distance of each target point from the focal plane center point, the actual imaging distance of each target point from the focal plane center point, and the relative distortion; the actual imaging distance of each target point from the focal plane center point is the distance between the imaging centroid of that target point and its reference imaging centroid;
3) interpolating over the target points obtained in step 2) to obtain a distortion correction model, and extending the correction model to every pixel to correct its geometric position.
2. The distortion correction method based on the RC optical system as set forth in claim 1, wherein: the star point target is a thin copper sheet with a hole; the star point target is selected according to the designed theoretical focal length, the collimator focal length and the pixel pitch, and the size of the projection of one pixel on the object plane is calculated according to the size of the hole in the star point target.
3. The distortion correction method based on the RC optical system as set forth in claim 1, wherein the acquiring of images and calculating of the camera interior parameters in step 2) specifically comprises the following steps:
1) rotating the high-precision six-degree-of-freedom displacement platform so that the image of the star point target scans the whole field of view, and recording the initial sagittal angle α1 and cut-off sagittal angle α2, and the initial meridional angle α3 and cut-off meridional angle α4 of the platform rotation, to obtain the field angle FOV:

FOV_sagittal = |α1 − α2|

FOV_meridional = |α3 − α4|
2) calculating the instantaneous field angle IFOV from the field angle FOV and the number of pixels of the camera detector:

IFOV_sagittal = FOV_sagittal / pixels1

IFOV_meridional = FOV_meridional / pixels2

wherein pixels1 and pixels2 are the numbers of camera detector pixels in the sagittal and meridional directions, respectively;
3) selecting n² target points uniformly distributed over the whole camera detector, denoted (xi, yj), in units of pixels, where i and j each take values from 1 to n independently;
4) collecting images of the star point target at each angle, and calculating the actual imaging centroid of each target point using a centroid algorithm;
5) calculating the interior parameters of the optical system from the calculated imaging centroids and the theoretical rotation angles.
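As an illustrative sketch of steps 1) and 2) of claim 3 (function and variable names are illustrative; the claim's IFOV formula image is not reproduced above, so the standard form IFOV = FOV / pixel count is assumed):

```python
def fov_ifov(a1, a2, a3, a4, pixels1, pixels2):
    # Field angles from the recorded start/cut-off rotation angles (claim 3, step 1)
    fov_sag = abs(a1 - a2)
    fov_mer = abs(a3 - a4)
    # Instantaneous field angle per pixel (assumed form: FOV / pixel count)
    return fov_sag, fov_mer, fov_sag / pixels1, fov_mer / pixels2
```

For example, a scan from −5° to +5° sagittally over a 1000-pixel row gives an IFOV of 0.01° per pixel.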
4. The distortion correction method based on the RC optical system as set forth in claim 3, wherein the calculating of the actual imaging centroid of each target point using the centroid algorithm in step 4) specifically comprises: recording the gray value DNi, abscissa pixel number Xi and ordinate pixel number Yi of the maximum-gray pixel of the star point imaging spot and of its 8 surrounding points, and solving the imaging centroid of each target point by the following calculation formula:
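The claim's formula image is not reproduced above; a minimal sketch, assuming the standard gray-weighted centroid over the 9 recorded samples, is:

```python
def spot_centroid(xs, ys, dns):
    # Gray-weighted centroid of the maximum-gray pixel and its 8 neighbours:
    # xs, ys are pixel coordinates, dns are the gray values DN_i
    total = sum(dns)
    xc = sum(x * dn for x, dn in zip(xs, dns)) / total
    yc = sum(y * dn for y, dn in zip(ys, dns)) / total
    return xc, yc
```

With a symmetric spot the centroid falls on the central pixel, as expected.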
5. The distortion correction method based on the RC optical system as set forth in claim 3, wherein: the calculation formulas of the principal point coordinates and the principal distance of the interior parameters in the step 5) are as follows:
wherein n² is the number of target points; xi and yi are the horizontal and vertical coordinates of the imaging centroid of each target point relative to the image plane center point; w and v are the rotation angles corresponding to xi and yi; −v represents the conversion between the high-precision six-degree-of-freedom displacement platform coordinate system and the camera focal plane coordinate system carried out in the calculation; fx and fy are the principal distances in the sagittal and meridional directions, respectively; f is the principal distance of the whole camera; and x0 and y0 are the camera principal point coordinates.
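Claim 5's formula images are not reproduced above. One plausible reconstruction, consistent with the imaging relation hx = f·tan w + x0 of claim 6, is a least-squares line fit of the centroid coordinates against the tangents of the rotation angles; the slope estimates the principal distance and the intercept the principal-point coordinate (NumPy-based sketch, names illustrative):

```python
import numpy as np

def fit_principal(tan_angles, coords):
    # Least-squares fit of coords ≈ f * tan(angle) + c0:
    # slope f = principal distance (pixels), intercept c0 = principal point coordinate
    A = np.vstack([tan_angles, np.ones_like(tan_angles)]).T
    (f, c0), *_ = np.linalg.lstsq(A, coords, rcond=None)
    return f, c0
```

The same fit is run independently for the sagittal and meridional directions to obtain fx, x0 and fy, y0.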
6. The distortion correction method based on the RC optical system as set forth in claim 3, wherein: the calculation formulas of the theoretical imaging distance of each target point from the focal plane center point, among the interior parameters in step 5), are as follows:
hx=f·tanw+x0
hy=f·tan(-v)+y0
wherein hx is the transverse theoretical imaging distance of the target point from the focal plane center point; hy is the longitudinal theoretical imaging distance of the target point from the focal plane center point; h is the theoretical imaging distance of the target point from the focal plane center point; w is the transverse rotation angle of the target point; v is the longitudinal rotation angle of the target point; and x0 and y0 are the camera principal point coordinates.
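The two formulas of claim 6 can be evaluated directly; the combined distance h is assumed here to be the Euclidean distance from the principal point, since its formula image is not reproduced above:

```python
import math

def theoretical_distance(f, w, v, x0, y0):
    hx = f * math.tan(w) + x0           # transverse theoretical position
    hy = f * math.tan(-v) + y0          # longitudinal theoretical position
    h = math.hypot(hx - x0, hy - y0)    # assumed combined theoretical distance
    return hx, hy, h
```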
7. The distortion correction method based on the RC optical system as set forth in claim 3, wherein: the calculation formulas of the actual imaging distance of each target point from the focal plane center point and the relative distortion Δh, among the interior parameters in step 5), are as follows:
wherein Rreal is the actual imaging distance of each target point from the focal plane center point; Δh is the relative distortion; xi and yi are the horizontal and vertical coordinates of the imaging centroid of each target point relative to the image plane center point; w0 is the transverse rotation angle of the principal point; and v0 is the longitudinal rotation angle of the principal point.
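A minimal sketch of the actual distance and relative distortion; since the claim's formula image is not reproduced above, both the Euclidean form of Rreal and the form Δh = (Rreal − h) / h are assumptions:

```python
import math

def relative_distortion(xi, yi, x0, y0, h):
    # Actual imaging distance of the target centroid from the principal point,
    # and relative distortion against the theoretical distance h (assumed form)
    r_real = math.hypot(xi - x0, yi - y0)
    return r_real, (r_real - h) / h
```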
8. The distortion correction method based on the RC optical system as set forth in claim 1, wherein the distortion correction in step 3) specifically comprises: performing distortion correction on the image by two-dimensional Lagrange interpolation; for a non-grid point (x, y), the function value is approximated by the Lagrange interpolation function:
obtained after modification of the above formula:
in the formula xijThe actual x coordinate of the ith row and jth column sampling point on the image surface; y isijThe actual y coordinate of the ith row and jth column sampling point on the image surface; xij,YijThe x and y coordinates of the theoretical position of the ith row and jth column sampling point on the image surface; (x, y) is the actual coordinates of the image point before distortion correction; [ U, V ]]Is the corrected theoretical position coordinate of the point (x, y) calculated by interpolation;
the theoretical position of each pixel is calculated from its actual position by interpolation using the above formula, realizing distortion correction near the target points of the image plane, i.e. obtaining the target point distortion correction model; this model is then extended to correct the geometric positions of all pixels of the detector, completing the preliminary distortion correction;

after the preliminary correction is completed, pixel points with a gray value of 0 appear on the image, and bilinear interpolation is needed for further correction.
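The Lagrange mapping of claim 8 can be sketched as follows, under the simplifying assumption that the sampling points lie on a separable (tensor-product) grid; the claim's general formula images over (xij, yij) are not reproduced above:

```python
import numpy as np

def lagrange_basis(nodes, x):
    # 1-D Lagrange basis polynomials l_i(x) over the given nodes
    n = len(nodes)
    out = np.ones(n)
    for i in range(n):
        for k in range(n):
            if k != i:
                out[i] *= (x - nodes[k]) / (nodes[i] - nodes[k])
    return out

def correct_point(x, y, xs, ys, U, V):
    # Theoretical position (u, v) of the actual image point (x, y):
    # U[j, i], V[j, i] are the theoretical coordinates at grid node (xs[i], ys[j])
    lx = lagrange_basis(xs, x)
    ly = lagrange_basis(ys, y)
    return ly @ U @ lx, ly @ V @ lx
```

With an identity mapping (theoretical positions equal to the grid coordinates) the interpolation reproduces the input point, which is a quick sanity check of the basis functions.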
9. The distortion correction method based on the RC optical system as set forth in claim 8, wherein: the formula for further correction using bilinear interpolation is:
wherein the red points Q11, Q12, Q21 and Q22 are 4 pixels with known gray values;
the further correction using bilinear interpolation specifically includes:
the first step: linear interpolation in the x direction, where the red points Q11, Q12, Q21 and Q22 are 4 pixels with known gray values; interpolating between Q12 and Q22 gives the gray value of the blue point R2, and similarly interpolating between Q11 and Q21 gives the blue point R1, thereby obtaining R1 and R2;
the second step: linear interpolation in the y direction, using R1 and R2 calculated in the first step to interpolate in the y direction and calculate the gray value of point P.
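The two-step bilinear interpolation of claim 9 can be sketched directly (variable names illustrative; the claim's formula image is not reproduced above):

```python
def bilinear(q11, q21, q12, q22, x1, x2, y1, y2, x, y):
    # Step 1: interpolate along x to get R1 (between Q11, Q21) and R2 (between Q12, Q22)
    t = (x - x1) / (x2 - x1)
    r1 = q11 + (q21 - q11) * t
    r2 = q12 + (q22 - q12) * t
    # Step 2: interpolate along y between R1 and R2 to get the gray value at P
    return r1 + (r2 - r1) * (y - y1) / (y2 - y1)
```

At the center of a unit cell the result is simply the mean of the four corner gray values.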
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910850304.XA CN110751601A (en) | 2019-09-10 | 2019-09-10 | Distortion correction method based on RC optical system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110751601A true CN110751601A (en) | 2020-02-04 |
Family
ID=69276280
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910850304.XA Pending CN110751601A (en) | 2019-09-10 | 2019-09-10 | Distortion correction method based on RC optical system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110751601A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104215261A (en) * | 2014-08-26 | 2014-12-17 | 中国科学院长春光学精密机械与物理研究所 | Distortion calibrating method for large-field reflex free form surface space camera |
CN105758623A (en) * | 2016-04-05 | 2016-07-13 | 中国科学院西安光学精密机械研究所 | TDI-CCD-based large-caliber long-focal-length remote sensing camera distortion measuring device and method |
CN106767907A (en) * | 2016-11-29 | 2017-05-31 | 上海卫星工程研究所 | Optical camera geometry imaging model high-precision calibrating and apparatus for evaluating and method |
CN109255760A (en) * | 2018-08-13 | 2019-01-22 | 青岛海信医疗设备股份有限公司 | Distorted image correction method and device |
Non-Patent Citations (2)
Title |
---|
XIAOYAN LI ET AL.: "Improved distortion correction method and applications for large aperture infrared tracking cameras" *
WANG Yudu et al.: "Calibration of interior orientation elements and distortion correction method for large-aperture surveying and mapping cameras" *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112435302A (en) * | 2020-12-09 | 2021-03-02 | 北京理工大学 | Long-distance large-view-field fisheye camera calibration method based on high-precision rotary table and collimator |
CN112435302B (en) * | 2020-12-09 | 2024-05-31 | 北京理工大学 | Remote large-view-field fisheye camera calibration method based on high-precision turntable and parallel light pipes |
CN113160319A (en) * | 2021-04-16 | 2021-07-23 | 广东工业大学 | Pixel-sub-pixel self-feedback matching visual rapid edge finding and point searching method |
CN113487740A (en) * | 2021-05-21 | 2021-10-08 | 北京控制工程研究所 | Space target nanometer precision imaging positioning method |
CN113538609A (en) * | 2021-06-17 | 2021-10-22 | 中科超精(南京)科技有限公司 | Position correction system and method of portal image device |
CN117572637A (en) * | 2024-01-16 | 2024-02-20 | 长春理工大学 | DMD-based optical imaging system imaging error correction method |
CN117572637B (en) * | 2024-01-16 | 2024-03-29 | 长春理工大学 | DMD-based optical imaging system imaging error correction method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110751601A (en) | Distortion correction method based on RC optical system | |
CN105913439B (en) | A kind of large-field shooting machine scaling method based on laser tracker | |
CN107014312B (en) | A kind of integral calibrating method of mirror-vibrating line laser structured light three-dimension measuring system | |
CN108489395B (en) | Vision measurement system structural parameters calibration and affine coordinate system construction method and system | |
CN109859272B (en) | Automatic focusing binocular camera calibration method and device | |
US7479982B2 (en) | Device and method of measuring data for calibration, program for measuring data for calibration, program recording medium readable with computer, and image data processing device | |
CN102326380B (en) | There are image sensor apparatus and the method for the efficient lens distortion calibration function of row buffer | |
CN111536902A (en) | Galvanometer scanning system calibration method based on double checkerboards | |
CN110345921B (en) | Stereo visual field vision measurement and vertical axis aberration and axial aberration correction method and system | |
CN113175899B (en) | Camera and galvanometer combined three-dimensional imaging model of variable sight line system and calibration method thereof | |
CN111486864B (en) | Multi-source sensor combined calibration method based on three-dimensional regular octagon structure | |
CN110099267A (en) | Trapezoidal correcting system, method and projector | |
CN112229323B (en) | Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method | |
CN112700502B (en) | Binocular camera system and binocular camera space calibration method | |
CN108305233A (en) | A kind of light field image bearing calibration for microlens array error | |
CN111009014A (en) | Calibration method of orthogonal spectral imaging pose sensor of general imaging model | |
CN109118525A (en) | A kind of dual-band infrared image airspace method for registering | |
CN105023281B (en) | Asterism based on point spread function wavefront modification is as centroid computing method | |
CN110490941B (en) | Telecentric lens external parameter calibration method based on normal vector | |
CN112489141B (en) | Production line calibration method and device for single-board single-image strip relay lens of vehicle-mounted camera | |
CN113362399B (en) | Calibration method for positions and postures of focusing mirror and screen in deflection measurement system | |
CN113870364A (en) | Self-adaptive binocular camera calibration method | |
CN113822949B (en) | Calibration method and device of binocular camera and readable storage medium | |
CN110689582B (en) | Total station camera calibration method | |
JP6560159B2 (en) | Position measuring device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20200204 |
|