CN110933280B - Front-view steering method and steering system for plane oblique image - Google Patents
- Publication number
- CN110933280B (application CN201911333318.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- axis
- oblique
- plane
- squint
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/55—Optical parts specially adapted for electronic image sensors; Mounting thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/50—Constructional details
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/64—Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
A front-view steering method and steering system for a plane oblique image, relating to the field of photoelectric technology and digital image processing. The system comprises an object plane, a lens, a photoelectric sensor and a processor, and solves the problems that in existing oblique-image correction the image is distorted and a front-view image cannot be obtained by rotation. Light from the object plane passes through the lens and forms an image on the photoelectric sensor; the photoelectric conversion area of the photoelectric sensor on the image plane is rectangular, so the imaging light reaching the image plane is converted into an image with rectangular edges. The oblique view comprises a forward oblique image and a side oblique image: the processor corrects a received forward oblique image in the u-axis and v-axis directions to obtain a front-view image, while a side oblique image is first rotated to the same orientation as a forward oblique image and then corrected to obtain the front-view image. The rotation angle required to correct an arbitrary obliquely shot image into a front view can be obtained by calculating the skyline or the squint vertex, using the common-vertex property of parallel lines in the oblique image.
Description
Technical Field
The invention relates to the field of photoelectric technology and digital image processing, in particular to a plane oblique image front-view steering method and a steering system.
Background
In aerial photographs, satellite images and security-monitoring images, the viewing angle over large areas is oblique, and the attitude and shooting angle of the camera cannot be controlled at will, so the images suffer oblique distortion. Because practical mapping uses orthographic (front-view) images, the oblique picture must be corrected by rotation. The existing rotation-correction method places a vertical marking line along the central axis on the oblique object plane; the included angle between the image of the marking line and a coordinate axis in the photograph is the angle needed for correction. However, for an arbitrary unmarked image, such as an aerial image, a satellite image or a security-monitoring image taken under arbitrary conditions, the required rotation is unknown without such a mark. Taking such photographs in military reconnaissance or space flight is already difficult; requiring an additional constraint on the shooting angle makes it harder still.
Disclosure of Invention
The invention provides a front-view steering method and steering system for a plane oblique image, aiming to solve the problems that in the existing oblique-image correction process the image is distorted and a front-view image cannot be obtained by rotation.
The front-view steering method for a plane oblique image is realized by the following steps:
An oblique-view image is acquired by a photoelectric sensor and received by an image processor; the oblique-view image is either a forward oblique image or a side oblique image. The central point of the rectangular photoelectric conversion area of the photoelectric sensor is taken as the image center; the straight line through the image center along the horizontal scanning direction of the photoelectric sensor is the u-axis, and the straight line through the image center along the vertical scanning direction is the v-axis; the u-axis and v-axis on the image plane form a uv rectangular coordinate system.
The image processor corrects a forward oblique image in the u-axis and v-axis directions to obtain a front-view image. A side oblique image is first rotated so that it is turned to the same orientation as a forward oblique image, and is then corrected to obtain the front-view image.
The conversion of a side oblique image into a forward oblique image is realized in any one of the following ways:
The first way: in the side oblique image, a set of parallel lines L1, L2 on the object plane gives a squint vertex QL on the image plane; another set of parallel lines N1, N2 on the object plane gives a squint vertex QN on the image plane; connecting QL and QN with a straight line gives the skyline. The skyline is parallel to the x-axis but not parallel to the u-axis; rotating the side oblique image so that the skyline becomes parallel to the u-axis converts it into a forward oblique image (a small computational sketch of this way is given just after this list).
The second way: find one set of parallel lines L1, L2 in the side oblique image and calculate, in the image-plane uv coordinate system, their squint vertex QL on the image plane; obtain the radius R of the skyline circle from the known viewing angle θ and image distance OF; draw the two tangent lines from the squint vertex QL to the skyline circle, and determine the effective skyline according to whether the direction angle of the parallel lines is greater than 90 degrees; rotate the side oblique image so that this skyline is parallel to the u-axis, converting the side oblique image into a forward oblique image.
The third way: take one set of parallel lines L1, L2 in the side oblique image and calculate, in the image-plane uv coordinate system, the coordinates (U, V) of their squint vertex; from the known viewing angle θ and image distance OF, calculate the rotation angle required to convert the side oblique image into a forward oblique image, which is represented by a formula in the viewing angle θ, the image distance OF and the squint vertex coordinates (U, V).
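As a non-authoritative sketch of the first way above, the rotation angle can be computed once the two squint vertices are known: the skyline is the line through them, and the image is rotated by the opposite of the skyline's angle against the u-axis. The (u, v) vertex values in the example are hypothetical, and a positive angle is assumed to mean a counter-clockwise rotation.

```python
import math

def rotation_from_skyline(q_l, q_n):
    # q_l, q_n: (u, v) coordinates of the two squint vertices in the image frame
    du = q_n[0] - q_l[0]
    dv = q_n[1] - q_l[1]
    skyline_angle = math.degrees(math.atan2(dv, du))  # angle of the skyline against the u-axis
    return -skyline_angle  # rotating by the opposite angle makes the skyline parallel to the u-axis

# Hypothetical squint vertices (values are illustrative only):
print(rotation_from_skyline((-1200.0, 950.0), (1800.0, 700.0)))
```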
A plane oblique image front-view steering system comprises an object plane, a lens, a photoelectric sensor and an image processor; light from the object plane is imaged on the photoelectric sensor through the lens.
The photoelectric conversion area of the photoelectric sensor on the image plane is rectangular, so the imaging light reaching the image plane is converted into an image with rectangular edges.
the point imaged in the image center in the object plane is the object center; a straight line between the optical center and the object center of the lens is a central axis, and light emitted by the object center passes through the optical center of the lens along the central axis to reach the image center;
when the object plane is obliquely imaged, the central axis is not the main optical axis of the lens; an included angle formed by the central axis and the perpendicular line of the object plane is a visual angle; the image shot when the visual angle is zero is a front view; when the visual angle is not zero, the shot image is an oblique view;
The oblique view comprises a forward oblique image and a side oblique image; the processor corrects a received forward oblique image in the u-axis and v-axis directions to obtain a front-view image, and rotates a side oblique image so that it is turned to the same orientation as a forward oblique image before correcting it to obtain the front-view image.
The invention has the beneficial effects that: with this steering method, for a plane image shot at any inclination, as long as two sets of parallel lines on the object plane can be found, the angle through which the image must be rotated during front-view correction can be calculated; the method does not require knowledge of the physical properties or operating parameters of the optical system, so plane pictures taken by any equipment in any manner can be processed; if only one set of parallel lines can be found in the image, the angle needed for correction can still be calculated by adding only two parameters of the optical system, the viewing angle and the image distance.
Using the common parallel scenery or rectangular objects in aerial or security images, the method obtains the angle through which an oblique image must be rotated for correction by calculating the skyline; after rotation correction by the method of the invention, views at other angles can be obtained from the oblique image by regular trapezoid transformation or inverse trapezoid transformation.
Drawings
FIG. 1 is an optical block diagram of a flat oblique image elevation steering system according to the present invention;
FIG. 2 is a schematic diagram showing that parallel lines on the tilted object plane have a common vertex (the squint vertex) when imaged on the image plane, in the front-view steering method for a plane oblique image according to the present invention;
FIG. 3 is a schematic diagram of the relationship between the vertices of several triangles in the xz plane and the object plane π in FIG. 2;
FIG. 4 is an auxiliary schematic view of the front view turning method for plane tilt images in the process of calculating the squint vertex according to the present invention;
FIG. 5 is a schematic diagram of the rotation of an arbitrarily oblique-view captured image to a normal oblique-view image using skyline according to the present invention;
FIG. 6 is a schematic diagram of a method for finding a skyline by tangency between the skyline and a skyline circle in the front-view steering method for a plane tilt image according to the present invention;
FIG. 7 is a schematic diagram of the third way of turning a side oblique image into a forward oblique image in the front-view steering method for a plane oblique image according to the present invention.
In the figures: 1, object plane; 2, lens; 3, photoelectric sensor; 4, perpendicular to the object plane; 5, central axis; 6, object center; 7, image center; x is the horizontal scanning direction of the photoelectric sensor; y is the vertical scanning direction of the photoelectric sensor.
Detailed Description
In a first embodiment, described with reference to FIGS. 1 to 7, the front-view steering method for a plane oblique image uses an imaging system comprising an object plane 1, a lens 2, a photoelectric sensor 3 and an image processor; the object plane 1 is a plane, namely the plane in which the photographed object lies, the height of the photographed object being ignored.
The lens 2 is an optical system consisting of a lens or a reflector; light from the object plane passes through the lens 2 and forms a real image on the photoelectric sensor 3; the image output by the photoelectric sensor 3 is sent to the image processor.
The photoelectric sensor 3 is a photoelectric image conversion element such as a CCD or a CMOS; the image plane is the plane on which the photoelectric sensor 3 receives the imaging light, and the photoelectric conversion area of the photoelectric sensor 3 on the image plane is rectangular, so the imaging light reaching the image plane is converted into an image with rectangular edges. The central point of the rectangular photoelectric conversion area of the photoelectric sensor 3 is called the image center; the straight line through the image center along the horizontal scanning direction of the photoelectric sensor 3 is the u-axis, and the straight line through the image center along the vertical scanning direction of the photoelectric sensor 3 is the v-axis; the u-axis and v-axis on the image plane form a uv rectangular coordinate system.
The point in the object plane 1 that is imaged at the image center is called the object center 6; the straight line between the optical center of the lens 2 and the object center is called the central axis, and light emitted from the object center travels along the central axis through the optical center of the lens 2 and arrives exactly at the image center. When the object plane is imaged obliquely, the central axis is not the main optical axis of the convex lens; the included angle between the central axis and the perpendicular to the object plane is the viewing angle. The image shot when the viewing angle is zero is a front view; when the viewing angle is not zero, the shot image is an oblique view.
Compared with front-view images, oblique-view images are distorted. Oblique views are divided into two cases, forward oblique images and side oblique images: a forward oblique image is shot when the object plane is parallel to the u-axis or the v-axis, and a side oblique image is shot when the object plane is parallel to neither coordinate axis. A forward oblique image can be corrected in the u and v directions to obtain a front-view image; a side oblique image cannot be corrected directly and must first be rotated so that it is turned to the orientation of a forward oblique image, after which it can be corrected to obtain the front-view image.
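The patent does not prescribe a particular implementation for the u/v-direction correction of a forward oblique image; a common way to realize such a trapezoid (keystone) correction is a projective warp, sketched below with OpenCV. The file names and the four corner coordinates are assumptions for illustration only.

```python
import cv2
import numpy as np

# Four image-plane corners of a ground rectangle that appears as a trapezoid
# in the forward oblique image (far edge compressed), and their upright targets.
src = np.float32([[410, 120], [870, 120],      # far (top) edge
                  [200, 700], [1080, 700]])    # near (bottom) edge
dst = np.float32([[200, 120], [1080, 120],
                  [200, 700], [1080, 700]])

img = cv2.imread("forward_oblique.png")            # hypothetical input image
M = cv2.getPerspectiveTransform(src, dst)          # 3x3 homography for the u/v correction
front_view = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))
cv2.imwrite("front_view.png", front_view)
```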
In the present embodiment, the steering method for converting a side oblique image into a forward oblique image is specifically as follows:
In a side oblique image β shot of an arbitrary obliquely placed object plane π, one set of parallel lines L1, L2 on the object plane π gives a squint vertex QL on β; another set of parallel lines N1, N2 on the object plane gives a squint vertex QN on β; connecting QL and QN with a straight line gives the skyline QLQN. The skyline is parallel to the x-axis but not parallel to the u-axis; the side oblique image can be converted into a forward oblique image simply by rotating it until the skyline is parallel to the u-axis. Rotating the side oblique image yields a forward oblique image γ in which the skyline of β is parallel to the horizontal direction of γ. This method does not require knowledge of the physical parameters of the optical system: the rotation angle can be obtained with the viewing angle θ and the image distance OF unknown, so it is applicable to any image. In security images, parallel lines such as doors and windows, corridors, passages or marked lines suitable for calculating the skyline are easy to find; in aerial images, rectangular objects on the object plane such as vehicles, houses and roads are common, and as long as one rectangle is found in the image, its two sets of parallel sides can be used to calculate the skyline.
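A minimal sketch of locating a squint vertex directly from an image: given two points on each imaged parallel line, the vertex is the intersection of the two image lines, computed here with homogeneous coordinates. The point pairs in the example are hypothetical.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points p and q, each given as (u, v)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def squint_vertex(p1, p2, p3, p4):
    """Intersection of the image of one parallel line (through p1, p2) with the
    image of the other (through p3, p4); None if the two image lines are parallel,
    i.e. the view is a front view with no finite squint vertex."""
    x = np.cross(line_through(p1, p2), line_through(p3, p4))
    if abs(x[2]) < 1e-9:
        return None
    return (x[0] / x[2], x[1] / x[2])

# Hypothetical endpoints sampled from the two imaged parallel lines:
print(squint_vertex((100, 620), (480, 300), (900, 640), (620, 310)))
```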
In the embodiment described with reference to FIGS. 2 to 4, the calculation of the squint vertex proceeds as follows:
Any set of parallel lines on the inclined object plane has an intersection point in its image on the image plane, called the squint vertex; the coordinates of the squint vertex can be obtained by calculation using straight-line equations and the proportional relations of similar triangles.
In FIG. 2, an oblique object plane π is imaged on an image plane β; the angle between the object plane and the image plane is θ, and a set of parallel lines L1 and L2 on the object plane forms an angle α with the x-axis. Their images BC and AD on the image plane intersect at the squint vertex Q, whose coordinates are (OF·cot(α)/sin(θ), OF·cot(θ)), and PQ is the skyline.
B1 and C1 are two points on L1, and A1 and D1 are two points on L2; the optical center is F and the actual image plane is β0. For ease of calculation the analysis is carried out on the image plane β, which is symmetric to β0 about the optical center F: every pixel on β corresponds, with the optical center F as the center of symmetry, to a pixel on β0. The straight lines BC and AD are respectively the real images on the image plane of the straight lines L1 and L2 of the object plane.
Since the plane FBC intersects the object plane in the straight line L1 and intersects the image plane in BC, the projection of the straight line L1 onto the image plane β as seen from the optical center is the straight line BC, and B and C correspond to B1 and C1 on the object plane.
Since the plane FAD intersects the object plane in the straight line L2 and intersects the image plane in AD, the projection of the straight line L2 onto the image plane β as seen from the optical center is the straight line AD, and A and D correspond respectively to A1 and D1 on the object plane.
The perpendicular FO from the optical center F to the image plane meets the object plane at the point G. The straight line L3 passes through G, is parallel to the image-plane line CD, and intersects the parallel lines L1 and L2 at C1 and D1 respectively. The straight line FP is parallel to the object plane and intersects the image plane at the point P; the included angle between FP and the image plane therefore equals the included angle θ between the object plane and the image plane. The straight line containing CD is taken as the x-axis, the straight line containing AB as the y-axis, and the straight line containing FO as the z-axis.
The equation of the straight line BC is:
The equation of the straight line AD is:
Combining the two equations into a system and solving gives the vertical coordinate of the intersection point of the straight lines BC and AD:
After simplification:
FIG. 3 shows, for the present embodiment, the relationship between the object plane π and several triangle vertices in the xz plane of FIG. 2. In the triangles FGC1, FGD1, FOC and FOD, since GC1 is parallel to CD and the straight line L1 is parallel to the straight line L2, a proportional relationship exists:
FIG. 4 shows the auxiliary construction of the present embodiment: an auxiliary line PK drawn from the point P meets the straight line FB at the point K; the perpendicular KI from K to the z-axis meets the straight line FG at I and the straight line FA at L; the straight line KM through K parallel to PF meets the straight line FA at M and the straight line FG at J; the perpendicular MH from M to the straight line IK meets IK at H.
In the triangles FGB1, FGA1, FJK and FJM, since KJ is parallel to GB1 and the straight line HM is parallel to the straight line IJ, a proportional relationship exists:
In the triangles FOA, FIL, FOB and FIK, since IK is parallel to BO, a proportional relationship exists:
Since the two right triangles FOP and JIK are congruent, and the triangle FOA is similar to the triangle MHL, a proportional relationship exists:
Substituting into the equation for the vertical coordinate of the intersection point of the straight lines BC and AD:
Therefore an intersection point Q of the straight lines BC and AD exists on the image plane; substituting the vertical coordinate of Q into the equation of the straight line AD:
A line parallel to the z-axis is drawn through the point P, meeting the object plane at the point A2; the point A2 then lies on the straight line GB1, so GA2 is parallel to FP. Through A2 a perpendicular to the straight line FG is drawn, meeting the extension of FG at G2; through A1 a perpendicular to FG is drawn, meeting the extension of FG at G1. Since the triangles FOD and FGD1 are similar, a proportional relationship exists:
If the angle between a parallel line and the positive x-axis direction is called the direction angle, denoted α, then in the right triangle GA1D1:
GD1 = A1G · cot(α)
Since the triangle A1G1G is similar to the triangle POF, the following relationship exists:
Thus:
Since the triangle AOF is similar to the triangle A1G1F, and the angle GA1G1 equals θ, in the right triangle A1G1G:
G1G = A1G1 · tan(θ)
Combining the above gives:
Substituting into the equation of the straight line AD and solving:
Therefore, regardless of their spacing, a set of parallel lines on the obliquely viewed object plane intersects at a point Q on the image plane, and the y-axis coordinate of the point Q is fixed, equal to the image distance OF multiplied by the cotangent of the included angle between the object plane and the image plane. This property is called the common-vertex property of parallel lines on the image plane; the intersection point Q is called the squint vertex of the set of parallel lines, and its coordinates are:
(OF·cot(α)/sin(θ), OF·cot(θ))
where α is the included angle between the parallel lines and the x-axis direction of the image plane.
Every squint vertex has the following characteristics: the coordinate of the squint vertex on the y-axis is related only to the viewing angle θ and the image distance; the coordinate of the squint vertex on the x-axis is determined by the direction angle α of the parallel lines, the viewing angle θ and the image distance. When the direction angle equals 90 degrees, the x-axis coordinate of the squint vertex is zero; when the direction angle is less than 90 degrees, the x-axis coordinate is positive; and when the direction angle is greater than 90 degrees, the x-axis coordinate is negative.
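A short sketch of the coordinate formula above; the parameter values in the example are hypothetical, and OF is assumed to be expressed in the same length unit as the image-plane coordinates.

```python
import math

def squint_vertex_xy(alpha_deg, theta_deg, OF):
    """Squint vertex (x, y) = (OF*cot(alpha)/sin(theta), OF*cot(theta))."""
    a = math.radians(alpha_deg)
    t = math.radians(theta_deg)
    x = OF / math.tan(a) / math.sin(t)  # ~0 for alpha = 90 deg, negative for alpha > 90 deg
    y = OF / math.tan(t)
    return x, y

print(squint_vertex_xy(60.0, 30.0, 35.0))   # direction angle 60 deg, viewing angle 30 deg
print(squint_vertex_xy(120.0, 30.0, 35.0))  # x turns negative for a direction angle > 90 deg
```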
On the object plane there are innumerable sets of parallel lines in all directions, and correspondingly there are many squint vertices on the image plane; all of these squint vertices lie on one straight line parallel to the x-axis. This straight line is called the skyline, and its intersection point with the y-axis has coordinates (0, yP), where:
yP = OF·cot(θ)
The skyline is the boundary line formed by imaging points at infinite distance on the object plane. Images obtained by aerial photography or security monitoring cover only a finite area and do not capture infinitely distant scenery, so the skyline often lies outside the effective pixel area of the picture; when the skyline is used, some virtual pixels outside the actual image area therefore have to be calculated.
The skyline on the image plane is fixed by the physical properties of the optical system and can be calculated directly from the image distance OF and the viewing angle θ. In an optical imaging system in which the image distance OF and the viewing angle θ are fixed values, a skyline circle can be assumed to exist on the image plane and its radius R can be calculated; the skyline circle is centered on the image center, and no matter how the scenery on the object plane rotates about the central axis, the skyline remains tangent to the skyline circle.
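A small sketch of the two quantities just described; treating the skyline-circle radius as the perpendicular distance from the image center to the skyline is an assumption consistent with the stated tangency, and the example values are illustrative.

```python
import math

def skyline_parameters(theta_deg, OF):
    y_p = OF / math.tan(math.radians(theta_deg))  # y-axis intercept of the skyline, OF*cot(theta)
    R = abs(y_p)  # assumed radius of the skyline circle tangent to the skyline
    return y_p, R

print(skyline_parameters(30.0, 35.0))
```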
A second embodiment, described with reference to FIG. 6, is another example of the steering method for converting a side oblique image into a forward oblique image. The method is specifically as follows:
In a side oblique image β shot of an obliquely placed object plane π, one set of parallel lines L1, L2 on the object plane π gives a squint vertex QL on β. If only one set of parallel lines L1, L2 can be found in the side oblique image and a second set cannot be found, but it is known whether the direction angle of the parallel lines is greater than 90 degrees, the squint vertex QL of L1, L2 on the image plane can still be calculated in the image-plane uv coordinate system. If the physical parameters of the optical system are known, the radius R of the skyline circle can be calculated from the known viewing angle θ and image distance OF. From the squint vertex QL two tangent lines can be drawn to the skyline circle; one of them is the effective skyline and the other is invalid. Because the sign of the x-axis coordinate of the squint vertex is determined by the direction angle of the parallel lines, the effective skyline can be selected simply from whether the direction angle is greater than 90 degrees. Rotating the side oblique image so that this skyline is parallel to the u-axis converts the side oblique image into a forward oblique image.
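A geometric sketch of the second way: the two candidate skylines are the tangents from the squint vertex QL to the skyline circle. The selection between the two candidates by the 90-degree direction-angle test is left as described in the text; the squint vertex and radius in the example are hypothetical (expressed in the same unit).

```python
import math

def tangent_points(U, V, R):
    """Tangent points on the skyline circle (radius R, centred on the image centre)
    of the two tangent lines drawn from the squint vertex E = (U, V); each candidate
    skyline is the line through E and one tangent point."""
    d = math.hypot(U, V)
    if d <= R:
        raise ValueError("the squint vertex must lie outside the skyline circle")
    phi = math.atan2(V, U)   # direction from the image centre O towards E
    beta = math.acos(R / d)  # angle between that direction and each tangent point
    t1 = (R * math.cos(phi + beta), R * math.sin(phi + beta))
    t2 = (R * math.cos(phi - beta), R * math.sin(phi - beta))
    return t1, t2

# Hypothetical squint vertex and skyline-circle radius; the effective tangent is then
# chosen according to whether the direction angle of the parallel lines exceeds 90 deg.
print(tangent_points(-90.0, 70.0, 60.6))
```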
A third embodiment, described with reference to FIG. 7, is another example of the steering method for converting a side oblique image into a forward oblique image. The method is specifically as follows:
If only one set of parallel lines L1, L2 can be found in the side oblique image and a second set cannot be found, the coordinates (U, V) of the squint vertex E of L1, L2 can be calculated in the image-plane uv coordinate system. If the physical parameters of the optical system are known, the rotation angle required to convert the side oblique image into a forward oblique image can be calculated directly from the known viewing angle θ and image distance OF; the angle EOS, denoted ψ, can be calculated using an inverse cotangent function.
The straight line containing the segment EP is the skyline; in the xy coordinate system the coordinate yP of the point P on the y-axis is:
yP = OF·cot(θ)
The angle PEO, denoted η, can be calculated by an arcsine function.
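As a non-authoritative sketch of the intermediate quantities used in the third way: ψ is taken here as the angle of the segment OE against the u-axis (an inverse-cotangent or arctangent form) and η from the arcsine of OF·cot(θ) over |OE|; how these angles combine into the final rotation angle follows the patent's own formula, so the combination is deliberately left open here. The example values are hypothetical and assume (U, V) is expressed in the same unit as OF.

```python
import math

def third_way_angles(U, V, theta_deg, OF):
    """Intermediate angles of the third way: psi = angle of the vertex direction OE
    against the u-axis, eta = angle PEO between the skyline and the segment EO."""
    y_p = OF / math.tan(math.radians(theta_deg))  # y-axis intercept of the skyline
    oe = math.hypot(U, V)                         # distance from the image centre O to the vertex E
    psi = math.degrees(math.atan2(V, U))          # equivalent to an inverse-cotangent expression
    eta = math.degrees(math.asin(min(1.0, abs(y_p) / oe)))
    return psi, eta

print(third_way_angles(-90.0, 70.0, 30.0, 35.0))
```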
Claims (6)
1. A front-view steering method for a plane oblique image, characterized in that the method is realized by the following steps:
acquiring an oblique-view image through a photoelectric sensor and receiving it by an image processor, the oblique-view image comprising a forward oblique image and a side oblique image; taking the central point of the rectangular photoelectric conversion area of the photoelectric sensor (3) as the image center; the straight line through the image center along the horizontal scanning direction of the photoelectric sensor (3) being the u-axis, and the straight line through the image center along the vertical scanning direction of the photoelectric sensor (3) being the v-axis; the u-axis and v-axis on the image plane forming a uv rectangular coordinate system;
correcting a forward oblique image in the u-axis and v-axis directions by the image processor to obtain a front-view image; rotating a side oblique image so that it is turned to the same orientation as a forward oblique image, and then correcting it to obtain the front-view image;
wherein the conversion of the side oblique image into the forward oblique image is realized in any one of the following ways:
the first way: in the side oblique image, a set of parallel lines L1, L2 on the object plane gives a squint vertex QL on the image plane; another set of parallel lines N1, N2 on the object plane gives a squint vertex QN on the image plane; connecting QL and QN with a straight line gives the skyline; the skyline is parallel to the x-axis and not parallel to the u-axis; rotating the side oblique image so that the skyline is parallel to the u-axis converts the side oblique image into a forward oblique image;
the squint vertex being defined as follows: a set of parallel lines on the obliquely viewed object plane intersects at a point Q on the image plane, and the y-axis coordinate of the point Q is fixed, equal to the image distance OF multiplied by the cotangent of the included angle between the object plane and the image plane; this property is called the common-vertex property of parallel lines on the image plane, the point Q is called the squint vertex of the set of parallel lines, and its coordinates are:
(OF·cot(α)/sin(θ), OF·cot(θ))
where θ is the viewing angle and α is the included angle between the parallel lines and the x-axis direction of the image plane;
the second way: finding a set of parallel lines L1, L2 in the side oblique image; calculating, in the image-plane uv coordinate system, their squint vertex QL on the image plane; obtaining the radius R of the skyline circle from the known viewing angle θ and image distance OF; drawing two tangent lines from the squint vertex QL to the skyline circle and determining the effective skyline according to whether the direction angle of the parallel lines is greater than 90 degrees; rotating the side oblique image so that the skyline is parallel to the u-axis, converting the side oblique image into a forward oblique image;
the third way: taking a set of parallel lines L1, L2 in the side oblique image; calculating, in the image-plane uv coordinate system, the coordinates (U, V) of their squint vertex; and calculating, from the known viewing angle θ and image distance OF, the rotation angle required to convert the side oblique image into a forward oblique image, which is represented by a formula in the viewing angle θ, the image distance OF and the squint vertex coordinates (U, V).
2. The method of claim 1, characterized in that: the coordinate of the squint vertex on the y-axis is related to the viewing angle θ and the image distance OF, while the coordinate of the squint vertex on the x-axis is determined by the direction angle of the parallel lines together with the viewing angle θ and the image distance OF;
the included angle between the parallel lines and the x-axis direction of the image plane being denoted α; when the direction angle of the parallel lines equals 90 degrees, the coordinate of the squint vertex on the x-axis is zero; when the direction angle of the parallel lines is less than 90 degrees, the coordinate of the squint vertex on the x-axis is positive, and when the direction angle of the parallel lines is greater than 90 degrees, the coordinate of the squint vertex on the x-axis is negative.
3. The method of claim 1, characterized in that: multiple sets of parallel lines in arbitrary directions on the object plane correspond to multiple squint vertices on the image plane; these squint vertices all lie on the skyline, and the intersection point yP of the skyline with the y-axis is calculated using the formula:
yP = OF·cot(θ)
and when the image distance OF and the viewing angle θ are fixed values, a skyline circle is assumed to exist on the image plane, centered on the image center, and the radius R of the skyline circle is obtained.
4. A steering system for the plane oblique image front-view steering method according to claim 1, characterized in that: the system comprises an object plane (1), a lens (2), a photoelectric sensor (3) and an image processor; light from the object plane (1) is imaged on the photoelectric sensor (3) through the lens (2);
the photoelectric conversion area of the photoelectric sensor (3) on the image plane is rectangular, so that the imaging light reaching the image plane is converted into an image with rectangular edges;
the point imaged in the image center in the object plane (1) is the object center; a straight line between the optical center and the object center of the lens (2) is a central axis, and light emitted by the object center passes through the optical center of the lens (2) along the central axis to reach the image center;
when the object plane is obliquely imaged, the central axis is not the main optical axis of the lens (2); an included angle formed by the central axis and the perpendicular line (4) of the object plane is a visual angle; the image shot when the visual angle is zero is a front view; when the visual angle is not zero, the shot image is an oblique view;
the oblique view comprises a forward oblique image and a side oblique image; the image processor corrects a received forward oblique image in the u-axis and v-axis directions to obtain a front-view image, and rotates a side oblique image so that it is turned to the same orientation as a forward oblique image before correcting it to obtain the front-view image.
5. The steering system of the plane oblique image front-view steering method according to claim 4, characterized in that: the system further comprises a display; the image formed on the photoelectric sensor (3) is steered by the image processing circuit (4), and the forward oblique image is then displayed on the display (5).
6. The steering system of the plane-tilt image orthographic steering method according to claim 4, wherein: the object plane (1) is a plane, the lens (2) is a convex lens or a concave mirror, and the photoelectric sensor (3) is a CCD or a CMOS.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911333318.0A CN110933280B (en) | 2019-12-23 | 2019-12-23 | Front-view steering method and steering system for plane oblique image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110933280A CN110933280A (en) | 2020-03-27 |
CN110933280B true CN110933280B (en) | 2021-01-26 |
Family
ID=69861724
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911333318.0A Active CN110933280B (en) | 2019-12-23 | 2019-12-23 | Front-view steering method and steering system for plane oblique image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110933280B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101021947A (en) * | 2006-09-22 | 2007-08-22 | 东南大学 | Double-camera calibrating method in three-dimensional scanning system |
CN101685197A (en) * | 2008-09-24 | 2010-03-31 | 中国科学院自动化研究所 | Method for evaluating tangential distortion indexes of lens of camera |
CN103020354A (en) * | 2012-12-12 | 2013-04-03 | 哈尔滨飞羽科技有限公司 | Design method of spherical curtain projection system for region identification |
CN103197418A (en) * | 2012-01-10 | 2013-07-10 | 上海微电子装备有限公司 | Alignment 4 F optics system |
US10613037B2 (en) * | 2014-09-29 | 2020-04-07 | SCREEN Holdings Co., Ltd. | Inspection apparatus and inspection method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101625760A (en) * | 2009-07-28 | 2010-01-13 | 谭洪舟 | Method for correcting certificate image inclination |
CN102509093B (en) * | 2011-10-18 | 2014-01-29 | 广州市加信电子技术有限公司 | Close-range digital certificate information acquisition system |
CN105352591A (en) * | 2015-12-04 | 2016-02-24 | 东华大学 | Vibration characteristic test method of spinning spindle |
Also Published As
Publication number | Publication date |
---|---|
CN110933280A (en) | 2020-03-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||