CN108876725A - Virtual image distortion correction method and system - Google Patents
Virtual image distortion correction method and system
- Publication number
- CN108876725A (application CN201710340695.1A)
- Authority
- CN
- China
- Prior art keywords
- distortion
- virtual image
- image
- grid lines
- inverse
- Prior art date
- 2017-05-12
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Transforming Electric Information Into Light Information (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a virtual image distortion correction method and system. The method comprises the following steps: outputting and displaying a target virtual image, and projecting the target virtual image to form a target projection image; acquiring the target projection image; calculating distortion parameters according to the target projection image; performing anti-distortion processing on the virtual image to be output by using the distortion parameters to obtain an anti-distortion-processed virtual image; and outputting and displaying the anti-distortion-processed virtual image. The present invention acquires the actually output target projection image and obtains the distortion parameters from it, so that after the virtual image to be output undergoes the anti-distortion processing with these parameters, the distortion and dispersion of the output virtual image are largely eliminated.
Description
Technical Field
The invention relates to the technical field of virtual image processing, in particular to a virtual image distortion correction method and system.
Background
Virtual Reality (VR) uses a computer image system together with various control devices to generate an interactive three-dimensional environment that gives the user an immersive experience. When VR content is displayed, the user cannot see the external scene and sees only the virtual image.
Augmented Reality (AR) is a technology that enhances the user's perception of the real world with information provided by a computer system: virtual objects, scenes, and information generated by the computer are superimposed onto the real scene, thereby augmenting reality. When AR content is displayed, the user sees both the real external scene and the virtual image.
However, when the virtual image is mapped into the real scene it is distorted, so the virtual objects in it appear deformed and lack realism, which degrades the user experience.
Disclosure of Invention
The invention aims to provide a virtual image distortion correction method to solve the distortion problem of the existing virtual image. In addition, the invention also provides a virtual image distortion correction system for implementing the virtual image distortion correction method.
In order to solve the above problem, the present invention provides, as an embodiment, a virtual image distortion correction method including the steps of:
outputting and displaying a target virtual image, and projecting the target virtual image to form a target projection image;
obtaining a target projection image;
calculating distortion parameters according to the target projection image;
carrying out inverse distortion processing on the virtual image to be output by using the distortion parameters to obtain an inverse distortion processed virtual image;
and outputting and displaying the virtual image after the anti-distortion processing.
As a further improved embodiment of the present invention, the step of calculating a distortion parameter from the target projection image includes:
performing trapezoidal correction processing on the target projection image to obtain a trapezoidal corrected image and a homography matrix of the trapezoidal corrected image;
extracting the grid lines of the trapezoidal corrected image to obtain distorted grid lines;
acquiring the maximum inscribed rectangle of the trapezoidal corrected image, and extracting the grid lines of the maximum inscribed rectangle to obtain equally-divided grid lines;
and calculating the distortion parameters according to the positional correspondence between the distorted grid lines and the equally divided grid lines.
As a further improved embodiment of the present invention, the step of calculating the distortion parameters according to the positional correspondence between the distorted grid lines and the equally divided grid lines includes:
obtaining a first intersection A(X0, Y0) in the distorted grid lines, and obtaining a second intersection A1(X1, Y1) corresponding to the first intersection A in the equally divided grid lines;
calculating the distortion parameters according to equation (1):
wherein the first intersection point A and the second intersection point A1 are obtained by sampling: a first intersection point sequence of the distorted grid lines and the corresponding second intersection point sequence of the equally divided grid lines are collected, the samples are substituted into equation (1), and the distortion parameters K1, K2 and K3 are solved for.
As a further improved embodiment of the present invention, the step of performing inverse distortion processing on the virtual image to be output by using the distortion parameter to obtain an inverse distortion processed virtual image includes:
calculating first coordinate information after distortion mapping of each pixel in a virtual image to be output through a formula (1), and obtaining a first distortion-removed image according to all the first coordinate information;
and performing inverse transformation on the first undistorted image by using the homography matrix to obtain a virtual image after the inverse distortion processing.
As a further improved embodiment of the present invention, the step of calculating the distortion parameters according to the positional correspondence between the distorted grid lines and the equally divided grid lines includes:
acquiring a first cross point sequence of the distorted grid lines and a second cross point sequence of the corresponding equally-divided grid lines;
and constructing a position mapping table according to the position relation of the first cross point sequence and the second cross point sequence, wherein the position mapping table is a distortion parameter.
As a further improved embodiment of the present invention, the step of performing inverse distortion processing on the virtual image to be output by using the distortion parameter to obtain an inverse distortion processed virtual image includes:
obtaining second coordinate information after distortion mapping of each pixel in the virtual image to be output according to the coordinate mapping table, and obtaining a second distortion-removed image according to all the second coordinate information;
and performing inverse transformation on the second undistorted image by using the homography matrix to obtain a virtual image after the inverse distortion processing.
In order to solve the above problem, the present invention also provides a virtual image distortion correction system, including:
the display screen is electrically connected with the processor and used for outputting and displaying the target virtual image and projecting the target virtual image;
the lens is arranged in the projection direction of the display screen and used for outputting and displaying a target projection image corresponding to the target virtual image;
the camera is electrically connected with the processor and used for acquiring a target projection image;
the processor is used for receiving the target projection image, calculating distortion parameters according to the target projection image, performing inverse distortion processing on the virtual image to be output by using the distortion parameters to obtain the virtual image after the inverse distortion processing, and transmitting the virtual image after the inverse distortion processing to the display screen for output and display.
As a further improved embodiment of the present invention, the processor includes:
the first trapezoidal correction module is used for carrying out trapezoidal correction processing on the target projection image to obtain a trapezoidal corrected image and a homography matrix of the trapezoidal corrected image;
the distorted grid line obtaining module is used for extracting the grid lines of the trapezoidal corrected image to obtain distorted grid lines;
the halved grid line acquisition module is used for acquiring the largest inscribed rectangle in the trapezoidal corrected image and extracting the grid lines of the largest inscribed rectangle to obtain halved grid lines;
and the distortion parameter calculation module is used for calculating the distortion parameters according to the positional correspondence between the distorted grid lines and the equally divided grid lines.
As a further improved embodiment of the present invention, the distortion parameter calculation module includes:
an acquisition unit for acquiring a first intersection A(X0, Y0) in the distorted grid lines and a second intersection A1(X1, Y1) corresponding to the first intersection A in the equally divided grid lines;
a first distortion parameter calculation unit for calculating the distortion parameters according to formula (1):
wherein the first intersection point A and the second intersection point A1 are obtained by sampling: a first intersection point sequence of the distorted grid lines and the corresponding second intersection point sequence of the equally divided grid lines are collected, the samples are substituted into formula (1), and the distortion parameters K1, K2 and K3 are solved by a least square method or a gradient descent method.
As a further improved embodiment of the present invention, the processor further includes:
the first distortion removal module is used for calculating first coordinate information after distortion mapping of each pixel in the virtual image to be output through a formula (1), and obtaining a first distortion removal image according to all the first coordinate information;
and the second trapezoidal correction module is used for performing inverse transformation on the first undistorted image by using the homography matrix to obtain a virtual image after inverse distortion processing.
As a further improved embodiment of the present invention, the distortion parameter calculation module includes:
the acquisition unit is used for acquiring a first cross point sequence of the distorted grid lines and a second cross point sequence of the corresponding equally-divided grid lines;
and the second distortion parameter calculation unit is used for constructing a position mapping table according to the position relation between the first cross point sequence and the second cross point sequence, and the position mapping table is a distortion parameter.
As a further improved embodiment of the present invention, the processor further includes:
the second distortion removal module is used for obtaining second coordinate information after distortion mapping of each pixel in the virtual image to be output according to the coordinate mapping table and obtaining a second distortion removal image according to all the second coordinate information;
and the third trapezoidal correction module is used for performing inverse transformation on the second undistorted image by using the homography matrix to obtain a virtual image after inverse distortion processing.
Compared with the prior art, the method and the device have the advantages that the actually output target projection image is obtained, and the distortion parameter is obtained according to the target projection image, so that the distortion and dispersion of the output virtual image are eliminated to a great extent after the virtual image to be output is subjected to anti-distortion processing by utilizing the distortion parameter.
Drawings
Fig. 1 is a flowchart illustrating a virtual image distortion correction method according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart illustrating a distortion parameter obtaining step in the virtual image distortion correction method according to an embodiment of the present invention.
Fig. 3 is a schematic view showing a process of processing a target projection image in the virtual image distortion correction method of the present invention.
FIG. 4 is a flowchart illustrating a distortion parameter calculating step of the virtual image distortion correcting method according to the first embodiment of the present invention.
FIG. 5 is a flowchart illustrating the inverse distortion processing step in the virtual image distortion correction method according to a first embodiment of the present invention.
FIG. 6 is a flowchart illustrating a distortion parameter calculating step in the virtual image distortion correcting method according to a second embodiment of the present invention.
FIG. 7 is a flowchart illustrating the inverse distortion processing step in the virtual image distortion correction method according to a second embodiment of the present invention.
FIG. 8 is a functional block diagram of an embodiment of a virtual image distortion correction system according to the present invention.
FIG. 9 is a schematic diagram of a virtual image distortion correcting system according to a first embodiment of the present invention.
FIG. 10 is a schematic diagram of an embodiment of lenses in the virtual image distortion correction system of the present invention.
FIG. 11 is a schematic diagram of a virtual image distortion correcting system according to a second embodiment of the present invention.
FIG. 12 is a block diagram of a processor of a virtual image distortion correction system according to a first embodiment of the present invention.
FIG. 13 is a functional block diagram of a distortion parameter calculating module of the virtual image distortion correcting system according to the first embodiment of the present invention.
FIG. 14 is a block diagram of a processor of a virtual image distortion correction system according to a second embodiment of the present invention.
FIG. 15 is a functional block diagram of a distortion parameter calculating module of a virtual image distortion correcting system according to a second embodiment of the present invention.
FIG. 16 is a block diagram of a processor of a virtual image distortion correction system according to a third embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 1 shows an embodiment of the virtual image distortion correction method of the present invention. In this embodiment, the virtual image distortion correction method includes the steps of:
in step S1, the display target virtual image is output and projected to form a target projection image.
It should be noted that the virtual image distortion correction method of the present embodiment is applied to a virtual image distortion correction system. The virtual image distortion correction system can be a head-mounted device comprising a display screen, a lens, a camera and a processor, and can also be composed of a mobile terminal and a head-mounted device, wherein the mobile terminal comprises a display screen and a processor. The head-mounted device includes a camera and a lens.
Specifically, if the output virtual image has distortion and dispersion, the display screen is used for outputting and displaying a target virtual image, and projecting the target virtual image to the lens to form a target projection image.
In step S2, a target projection image is acquired.
Specifically, the camera is moved to the exit-pupil position of the user's eyes and then photographs the projection on the lens to obtain the target projection image. Because the camera is placed at the exit-pupil position, the photographed target projection image is substantially consistent with what the user's eyes would see, so the distortion parameters calculated from it are more accurate.
And step S3, calculating distortion parameters according to the target projection image.
Further, on the basis of the above embodiment, in another embodiment, referring to fig. 2, step S3 includes:
step S31, performing trapezoidal correction processing on the target projection image to obtain a trapezoidal corrected image and a homography matrix of the trapezoidal corrected image.
And step S32, extracting the grid lines of the trapezoidal corrected image to obtain distorted grid lines.
And step S33, acquiring the maximum inscribed rectangle of the trapezoidal corrected image, and extracting the grid lines of the maximum inscribed rectangle to obtain the equally divided grid lines.
And step S34, calculating the distortion parameters according to the positional correspondence between the distorted grid lines and the equally divided grid lines.
Specifically, referring to fig. 3, when the distortion parameters are calculated with the virtual image distortion correction method of the present embodiment, first, the four vertices of the target projection image are extracted and keystone correction is performed with the correction matrix H so that the four vertices form a rectangle, giving the trapezoidal corrected image. Secondly, the checkerboard edges of the trapezoidal corrected image are extracted with an edge-extraction method and filtered to obtain smooth distorted grid lines. Thirdly, the maximum inscribed rectangle of the trapezoidal corrected image is obtained and divided into an n × n grid to obtain the equally divided grid lines. Finally, the distortion parameters are calculated from the positional correspondence between the distorted grid lines and the equally divided grid lines.
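For illustration, the following is a minimal sketch of how steps S31 to S33 could be realized with OpenCV, assuming the target virtual image is a checkerboard pattern; the grid size, output resolution, and the way the four vertices and the inscribed rectangle are obtained are assumptions of the example, not requirements of the method.

```python
# Sketch of steps S31-S33 (keystone correction, distorted grid extraction,
# equally divided grid). Checkerboard size and helpers are illustrative assumptions.
import cv2
import numpy as np

def keystone_correct(photo, corners, out_w=1280, out_h=720):
    """Warp the photographed projection so its four vertices form a rectangle.

    corners: four vertices of the distorted projection, ordered top-left,
    top-right, bottom-right, bottom-left (assumed to be extracted beforehand).
    Returns the trapezoidal corrected image and the homography H.
    """
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    H = cv2.getPerspectiveTransform(src, dst)
    corrected = cv2.warpPerspective(photo, H, (out_w, out_h))
    return corrected, H

def distorted_grid_points(corrected, pattern=(9, 6)):
    """Locate the checkerboard crossings in the corrected photo (the distorted grid)."""
    gray = cv2.cvtColor(corrected, cv2.COLOR_BGR2GRAY)
    found, pts = cv2.findChessboardCorners(gray, pattern)
    if not found:
        raise RuntimeError("checkerboard not found")
    return pts.reshape(-1, 2)

def equally_divided_grid(rect, pattern=(9, 6)):
    """Build the ideal, equally divided grid inside the maximum inscribed rectangle.

    rect: (x, y, w, h) of the inscribed rectangle, assumed to be found separately.
    """
    x, y, w, h = rect
    xs = np.linspace(x, x + w, pattern[0])
    ys = np.linspace(y, y + h, pattern[1])
    return np.float32([(px, py) for py in ys for px in xs])
```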
And step S4, performing inverse distortion processing on the virtual image to be output by using the distortion parameters to obtain the virtual image after the inverse distortion processing.
In step S5, the virtual image after the inverse distortion processing is output and displayed.
In the embodiment, the target projection image which is actually output is acquired, and the distortion parameter is acquired according to the target projection image, so that after the virtual image to be output is subjected to the anti-distortion processing by using the distortion parameter, the distortion and the dispersion of the output virtual image are eliminated to a great extent.
The distortion parameters may be calculated from the positional correspondence between the distorted grid lines and the equally divided grid lines by either a parametric method or a non-parametric method. Therefore, on the basis of the above embodiment, in another embodiment, if the distortion parameters are calculated with the parametric method, referring to fig. 4, step S34 includes:
Step S3401, acquiring a first intersection A(X0, Y0) in the distorted grid lines and acquiring a second intersection A1(X1, Y1) corresponding to the first intersection A in the equally divided grid lines.
Step S3402, calculating a distortion parameter according to formula (1):
wherein the first intersection point A and the second intersection point A1 are obtained by sampling: a first intersection point sequence of the distorted grid lines and the corresponding second intersection point sequence of the equally divided grid lines are collected, the samples are substituted into formula (1), and the distortion parameters K1, K2 and K3 are solved by a least square method or a gradient descent method.
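Formula (1) is not reproduced in the text above, so the sketch below assumes a common radial polynomial model consistent with coefficients K1, K2 and K3: X1 = Xc + (X0 - Xc)*(1 + K1*r^2 + K2*r^4 + K3*r^6), and likewise for Y1, where (Xc, Yc) is an assumed distortion centre and r is the distance of (X0, Y0) from it. Under that assumption, the least-squares fit of step S3402 could look as follows.

```python
# Hedged sketch of step S3402 under the ASSUMED radial polynomial model above;
# this is not necessarily the exact form of the patent's formula (1).
import numpy as np

def fit_radial_coeffs(distorted_pts, ideal_pts, centre):
    """Least-squares estimate of K1, K2, K3 from matched intersection sequences.

    distorted_pts: (N, 2) first intersection sequence (X0, Y0) of the distorted grid.
    ideal_pts:     (N, 2) second intersection sequence (X1, Y1) of the equally divided grid.
    centre:        assumed distortion centre, e.g. the image centre.
    """
    d = np.asarray(distorted_pts, dtype=float) - centre   # (X0 - Xc, Y0 - Yc)
    i = np.asarray(ideal_pts, dtype=float) - centre       # (X1 - Xc, Y1 - Yc)
    r2 = np.sum(d * d, axis=1)                            # r^2 for each sample

    # (i - d) = d * (K1*r^2 + K2*r^4 + K3*r^6) is linear in K1, K2, K3.
    cols = [d * (r2 ** p)[:, None] for p in (1, 2, 3)]    # three (N, 2) blocks
    A = np.stack(cols, axis=-1).reshape(-1, 3)            # 2N equations, 3 unknowns
    b = (i - d).reshape(-1)
    k, *_ = np.linalg.lstsq(A, b, rcond=None)             # least-squares solve
    return k                                              # [K1, K2, K3]
```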
Further, the inverse distortion processing applied to the virtual image to be output may likewise follow a parametric or a non-parametric method. Therefore, on the basis of the above embodiment, in another embodiment, if the virtual image to be output is processed with the parametric method, referring to fig. 5, step S4 includes:
step S401, calculating first coordinate information after distortion mapping of each pixel in a virtual image to be output through a formula (1), and obtaining a first distortion-removed image according to all the first coordinate information.
Step S402, inverse transformation is carried out on the first undistorted image by utilizing the homography matrix to obtain a virtual image after inverse distortion processing.
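A minimal sketch of steps S401 and S402, continuing the assumed radial model and the coefficients fitted above: every pixel of the virtual image to be output is mapped through the model to build the first undistorted image, and the keystone correction is then undone with the inverse homography. The mapping direction and the use of OpenCV's remap are assumptions of this sketch and depend on how formula (1) is defined.

```python
# Hedged sketch of steps S401-S402 under the assumed radial model.
import cv2
import numpy as np

def anti_distort(virtual_img, k, centre, H):
    """Pre-warp the virtual image (inverse distortion), then undo the keystone homography.

    virtual_img: virtual image to be output (h x w x 3).
    k:           (K1, K2, K3), e.g. from fit_radial_coeffs().
    centre:      assumed distortion centre used during fitting.
    H:           homography obtained in the keystone correction step.
    """
    h, w = virtual_img.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    dx, dy = xs - centre[0], ys - centre[1]
    r2 = dx * dx + dy * dy
    scale = 1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3

    # For each output pixel, sample the source at its distortion-mapped position;
    # whether this is the forward or inverse direction depends on formula (1)'s convention.
    map_x = (centre[0] + dx * scale).astype(np.float32)
    map_y = (centre[1] + dy * scale).astype(np.float32)
    prewarped = cv2.remap(virtual_img, map_x, map_y, cv2.INTER_LINEAR)

    # Undo the keystone correction applied while measuring the parameters (step S402).
    return cv2.warpPerspective(prewarped, np.linalg.inv(H), (w, h))
```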
If the distortion parameters are calculated with the non-parametric method from the positional correspondence between the distorted grid lines and the equally divided grid lines, referring to fig. 6, step S34 includes:
step S3411, a first cross point sequence of the distorted grid lines and a second cross point sequence of the corresponding equally-divided grid lines are collected.
Step S3412, constructing a position mapping table according to the position relationship between the first cross point sequence and the second cross point sequence, where the position mapping table is a distortion parameter.
Specifically, a coordinate mapping table organized by quadrilateral grid cells is constructed from the coordinates of the four vertices of each cell of the distorted grid lines and the relative positions of the corresponding four vertices of the equally divided grid lines.
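A minimal sketch of how such a position mapping table could be built, assuming the two intersection sequences are ordered row by row over the grid; the table layout (one pair of quadrilaterals per grid cell) is an illustrative choice, not a prescription of the patent.

```python
# Hedged sketch of steps S3411-S3412: non-parametric mapping table from matched grids.
import numpy as np

def build_mapping_table(distorted_pts, ideal_pts, pattern=(9, 6)):
    """Pair each cell's four distorted vertices with its four equally divided vertices.

    Returns a list of (distorted_quad, ideal_quad) pairs; each quad is a (4, 2) array
    ordered top-left, top-right, bottom-right, bottom-left.
    """
    cols, rows = pattern
    d = np.asarray(distorted_pts, dtype=np.float32).reshape(rows, cols, 2)
    i = np.asarray(ideal_pts, dtype=np.float32).reshape(rows, cols, 2)
    table = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            dq = np.array([d[r, c], d[r, c + 1], d[r + 1, c + 1], d[r + 1, c]])
            iq = np.array([i[r, c], i[r, c + 1], i[r + 1, c + 1], i[r + 1, c]])
            table.append((dq, iq))
    return table
```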
If the inverse distortion processing of the virtual image to be output uses the non-parametric distortion parameters, referring to fig. 7, step S4 includes:
step S411, obtaining second coordinate information after distortion mapping of each pixel in the virtual image to be output according to the coordinate mapping table, and obtaining a second distortion-removed image according to all the second coordinate information.
Specifically, the present embodiment may implement the non-parametric inverse distortion by a triangle texture mapping method to obtain the second undistorted image (a sketch follows step S412 below). The triangle texture mapping method does not need to look up each pixel in the coordinate mapping table individually, which speeds up the inverse distortion.
In step S412, the second undistorted image is inversely transformed by using the homography matrix to obtain a virtual image after the inverse distortion processing.
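The triangle texture mapping idea mentioned at step S411 could be sketched as follows: each grid cell of the mapping table is split into two triangles and warped with a single affine transform per triangle, instead of looking up every pixel. The quad orientation and the masking approach are assumptions of this sketch, and a production version would restrict each warp to the triangle's bounding box rather than the whole image.

```python
# Hedged sketch of triangle texture mapping for the non-parametric inverse distortion.
import cv2
import numpy as np

def warp_by_triangles(src_img, mapping_table, out_size):
    """Build the second undistorted image with one affine warp per triangle."""
    out_w, out_h = out_size
    dst = np.zeros((out_h, out_w, 3), dtype=src_img.dtype)
    for dq, iq in mapping_table:             # (distorted_quad, ideal_quad) pairs
        # Which quad is source and which is destination depends on the convention
        # chosen for the mapping table; ideal -> distorted is assumed here so that
        # the optics re-distort the result back toward the ideal grid.
        for tri in ((0, 1, 2), (0, 2, 3)):   # split each quad into two triangles
            src_tri = np.float32([iq[j] for j in tri])
            dst_tri = np.float32([dq[j] for j in tri])
            M = cv2.getAffineTransform(src_tri, dst_tri)
            warped = cv2.warpAffine(src_img, M, (out_w, out_h))
            mask = np.zeros((out_h, out_w), dtype=np.uint8)
            cv2.fillConvexPoly(mask, np.int32(dst_tri), 1)
            dst[mask == 1] = warped[mask == 1]
    return dst
```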
FIG. 8 illustrates one embodiment of the virtual image distortion correction system of the present invention. In the present embodiment, the virtual image distortion correction system includes a display screen 1, a lens 2, a camera 3, and a processor 4. The processor 4 is electrically connected to the display screen 1 and the camera 3, respectively.
Specifically, the display screen 1 is used to output a display target virtual image, and project the target virtual image onto the lens 2 to form a target projection image. The lens 2 is disposed in the projection direction of the display screen 1, and outputs and displays the target projection image. The camera 3 is used to acquire a target projection image. The processor 4 is configured to receive the target projection image, calculate a distortion parameter according to the target projection image, perform inverse distortion processing on the virtual image to be output by using the distortion parameter to obtain an inverse distortion processed virtual image, and transmit the inverse distortion processed virtual image to the display screen 1 for output and display.
It should be noted that the virtual image distortion correcting system of this embodiment may be a head-mounted device including a display screen, a lens, a camera, and a processor, or may be composed of a mobile terminal and a head-mounted device, where the mobile terminal includes a display screen and a processor. The head-mounted device includes a camera and a lens.
To describe the technical solution of the present invention in more detail, the present invention will be described in detail by taking a virtual image distortion correction system constituted by a mobile terminal and a head-mounted device as an example.
Referring to fig. 9, the virtual image distortion correction system includes a mobile terminal 10 and a head mounted device 20 used in cooperation with the mobile terminal 10. When used cooperatively, the mobile terminal 10 is electrically connected to the head-mounted device 20.
The head-mounted device 20 includes a helmet 201, a lens 202, and a retractable camera 203. The helmet 201 has a recess 2011 for receiving the mobile terminal 10 at the top thereof. The lens 202 is disposed at the front edge of the helmet 201. The retractable camera 203 is disposed outside the sidewall of the helmet 201, and the retractable camera 203 is used for acquiring a projection image of an object on the lens 202.
The mobile terminal 10 includes a display screen 100 and a processor (not shown). The processor is electrically connected to the display screen 100 and the retractable camera 203, respectively. The display screen 100 is used for outputting the target virtual image, projecting the target virtual image onto the lens 202 to form a target projection image, and outputting and displaying the virtual image after the inverse distortion processing. The processor is configured to receive the target projection image sent by the retractable camera 203, calculate distortion parameters according to the target projection image, perform inverse distortion processing on the virtual image to be output and displayed by using the distortion parameters, and output the inverse-distortion-processed virtual image to the display screen 100 for output and display.
Referring to fig. 9, when the virtual image distortion correction system of the present embodiment is used to correct a virtual image, first, the processor sends a control instruction to the display screen 100 to control the display screen 100 to output a display target virtual image, and projects the target virtual image onto the lens 202 to form a target projection image. Next, the retractable camera 203 acquires a target projection image on the lens 202 and transmits the target projection image to the processor. Thirdly, the processor calculates distortion parameters according to the target projection image. Finally, the processor performs inverse distortion processing on the virtual image to be output and displayed by using the distortion parameter, and controls the display screen 100 to output and display the virtual image to be output and displayed after the inverse distortion processing.
In the embodiment, the distortion parameter is calculated according to the target projection image actually output on the lens, so that the calculated distortion parameter is more accurate. In addition, the embodiment acquires the distortion parameter according to the target projection image, so that after the virtual image to be output is subjected to the anti-distortion processing by using the distortion parameter, the distortion and the dispersion of the output virtual image are greatly eliminated. In addition, the head-mounted device is matched with the mobile terminal for use, and the processor of the mobile terminal is responsible for distortion parameter calculation, control of an output display process and the like, so that the head-mounted device only needs to acquire a target projection image, the structure of the head-mounted device is simplified, and the design and production cost of the head-mounted device is reduced.
In the use process of the virtual image distortion correcting system of the embodiment, the user may need the head-mounted device to be compatible with two modes, namely VR virtual reality mode and AR augmented reality mode. Therefore, in addition to the above embodiments, in other embodiments, referring to fig. 10, the lens 202 includes a reflective coating 2020, an electrochromic glass concave lens 2021 and a protective glass 2022, the reflective coating 2020 is disposed on the inner side of the electrochromic glass concave lens 2021, the protective glass 2022 is disposed on the outer side of the electrochromic glass concave lens 2021, and the electrochromic glass concave lens 2021 is electrically connected to the processor; when receiving an AR mode entering signal, the processor controls a current signal of the electrochromic glass concave lens 2021, so that the electrochromic glass concave lens 2021 is in a transparent state, and the superposition display of the external scene and the first virtual image in the AR mode is realized; when receiving the VR mode entering signal, the processor controls the current signal of the electrochromic glass concave lens 2021, so that the electrochromic glass concave lens 2021 is in a non-transparent state, and outputs and displays a second virtual image in the VR mode.
In this embodiment, switching between the VR virtual reality mode and the AR augmented reality mode is realized through the electrochromic glass concave lens 2021, so the two modes are compatible in one device. In addition, the protective glass 2022 arranged outside the electrochromic glass concave lens 2021 increases the hardness of the lens, making it less likely to be damaged and protecting the electrochromic glass concave lens 2021 from scratches and similar damage. Furthermore, the reflective coating 2020 arranged on the inner side of the electrochromic glass concave lens 2021 filters out light that is harmful to the eyes, preventing eye damage.
Further, there are various ways to generate the AR mode entering signal or the VR mode entering signal, and in order to describe the technical solution of the present invention in more detail, this embodiment will be described in detail with respect to several main ways.
1. Mode selection function key
Referring to fig. 11, a mode selection function button 2012 is also provided on the helmet 201.
It should be noted that there are various ways of generating the AR mode entering signal or the VR mode entering signal through the mode selection function key 2012. To describe the technical solution of the present invention in more detail, the following takes as an example using the number of key presses to decide whether to generate the AR mode entering signal or the VR mode entering signal.
When the number of times that the user presses the mode selection function key 2012 is odd, an AR mode entering signal is generated; when the number of times the user presses the mode selection function key 2012 is even, a VR mode enter signal is generated.
2. Gesture
The retractable camera 203 is further configured to obtain a target gesture of the user, and the processor is further configured to generate an AR mode entry signal or a VR mode entry signal corresponding to the target gesture.
Specifically, when the retractable camera 203 captures a "V" gesture, an AR mode entering signal is generated; when the retractable camera 203 captures an "OK" gesture, a VR mode entering signal is generated.
3. Acceleration sensor
The helmet 201 is further provided with an acceleration sensor (not shown in the figure) for acquiring the sensing signal and electrically connected to the processor, and the processor is further configured to generate an AR mode entering signal or a VR mode entering signal corresponding to the sensing signal.
In particular, the acceleration sensor is used to identify user-specific motion patterns such as two consecutive shakes or two consecutive nods.
When the processor determines from the sensing signal that the user's motion pattern is two consecutive shakes, an AR mode entering signal is generated; when the processor determines from the sensing signal that the user's motion pattern is two consecutive nods, a VR mode entering signal is generated.
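As an illustration only, recognizing "two consecutive shakes" or "two consecutive nods" from the acceleration samples could be sketched as below; the axes, thresholds, and time window are assumed values for the example, not taken from this document.

```python
# Hedged sketch of motion-pattern recognition from acceleration samples.
SHAKE_THRESH = 15.0   # lateral-axis threshold in m/s^2 (assumed)
NOD_THRESH = 12.0     # vertical-axis threshold in m/s^2 (assumed)
WINDOW = 1.0          # two peaks within this many seconds count as "consecutive"

def classify_motion(samples):
    """samples: list of (timestamp, ax, ay, az). Returns 'AR', 'VR' or None."""
    shakes = [t for t, ax, ay, az in samples if abs(ax) > SHAKE_THRESH]
    nods = [t for t, ax, ay, az in samples if abs(az) > NOD_THRESH]
    if len(shakes) >= 2 and shakes[-1] - shakes[-2] <= WINDOW:
        return "AR"   # two consecutive shakes -> AR mode entering signal
    if len(nods) >= 2 and nods[-1] - nods[-2] <= WINDOW:
        return "VR"   # two consecutive nods -> VR mode entering signal
    return None
```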
On the basis of the above embodiment, in other embodiments, referring to fig. 12, the processor 4 includes a first keystone correction module 40, a distorted grid line obtaining module 41, an equally divided grid line obtaining module 42, and a distortion parameter calculating module 43.
The first trapezoidal correction module 40 is configured to perform trapezoidal correction processing on the target projection image to obtain a trapezoidal corrected image and a homography matrix of the trapezoidal corrected image; the distorted grid line obtaining module 41 is configured to extract the grid lines of the trapezoidal corrected image to obtain distorted grid lines; the equally-divided grid line obtaining module 42 is configured to obtain the maximum inscribed rectangle of the trapezoidal corrected image and extract its grid lines to obtain equally divided grid lines; and the distortion parameter calculation module 43 is configured to calculate the distortion parameters according to the positional correspondence between the distorted grid lines and the equally divided grid lines.
On the basis of the above-described embodiment, in another embodiment, referring to fig. 13, the distortion parameter calculation module 43 includes an acquisition unit 4301 and a first distortion parameter calculation unit 4302.
Wherein, the obtaining unit 4301 is configured to obtain a first intersection A(X0, Y0) in the distorted grid lines and a second intersection A1(X1, Y1) corresponding to the first intersection A in the equally divided grid lines; and the first distortion parameter calculation unit 4302 is configured to calculate the distortion parameters according to equation (1):
wherein the first intersection point A and the second intersection point A1 are obtained by sampling: a first intersection point sequence of the distorted grid lines and the corresponding second intersection point sequence of the equally divided grid lines are collected, the samples are substituted into equation (1), and the distortion parameters K1, K2 and K3 are solved by a least square method or a gradient descent method.
In addition to the above embodiments, in other embodiments, referring to fig. 14, the processor 4 further includes a first distortion removal module 50 and a second keystone correction module 51.
The first distortion removing module 50 is configured to calculate, by using formula (1), first coordinate information after distortion mapping of each pixel in the virtual image to be output, and obtain a first distortion removed image according to all the first coordinate information; and a second keystone correction module 51 for inverse transforming the first undistorted image using the homography matrix to obtain an inverse-distorted virtual image.
On the basis of the above embodiment, in other embodiments, referring to fig. 15, the distortion parameter calculation module 43 includes an acquisition unit 4311 and a second distortion parameter calculation unit 4312.
The acquisition unit 4311 is configured to acquire a first cross point sequence of the distorted grid lines and a second cross point sequence of the corresponding equally-divided grid lines; the second distortion parameter calculating unit 4312 is configured to construct a position mapping table according to a position relationship between the first cross point sequence and the second cross point sequence, where the position mapping table is a distortion parameter.
In addition to the above embodiments, in other embodiments, referring to fig. 16, the processor 4 further includes a second distortion removal module 60 and a third keystone correction module 61.
The second distortion removing module 60 is configured to obtain second coordinate information after distortion mapping of each pixel in the virtual image to be output according to the coordinate mapping table, and obtain a second distortion removed image according to all the second coordinate information;
and a third keystone correction module 61, configured to perform inverse transformation on the second undistorted image by using the homography matrix to obtain an inverse-distorted virtual image.
For other details of the technical solutions implemented by the modules of the virtual image distortion correction system in the above embodiments, reference may be made to the description of the virtual image distortion correction method in the foregoing embodiments, and they are not repeated here.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the terminal class embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for relevant points, reference may be made to part of the description of the method embodiment.
The above detailed description of the embodiments of the present invention is provided as an example, and the present invention is not limited to the above described embodiments. It will be apparent to those skilled in the art that any equivalent modifications or substitutions can be made within the scope of the present invention, and thus, equivalent changes and modifications, improvements, etc. made without departing from the spirit and scope of the present invention should be included in the scope of the present invention.
Claims (12)
1. A virtual image distortion correction method is characterized by comprising the following steps:
outputting and displaying a target virtual image, and projecting the target virtual image to form a target projection image;
acquiring the target projection image;
calculating to obtain distortion parameters according to the target projection image;
carrying out inverse distortion processing on the virtual image to be output by using the distortion parameters to obtain an inverse distortion processed virtual image;
and outputting and displaying the virtual image after the anti-distortion processing.
2. The virtual image distortion correction method of claim 1, wherein the step of calculating distortion parameters from the target projection image comprises:
performing trapezoidal correction processing on the target projection image to obtain a trapezoidal corrected image and a homography matrix of the trapezoidal corrected image;
extracting the grid lines of the trapezoidal corrected image to obtain distorted grid lines;
acquiring the maximum inscribed rectangle of the trapezoidal corrected image, and extracting the grid lines of the maximum inscribed rectangle to obtain equally divided grid lines;
and calculating the distortion parameters according to the positional correspondence between the distorted grid lines and the equally divided grid lines.
3. The virtual image distortion correction method according to claim 2, wherein the step of calculating the distortion parameters according to the positional correspondence between the distorted grid lines and the equally divided grid lines comprises:
obtaining a first intersection A(X0, Y0) in the distorted grid lines, and obtaining a second intersection A1(X1, Y1) corresponding to the first intersection A in the equally divided grid lines;
calculating the distortion parameters according to equation (1):
wherein the first intersection point A and the second intersection point A1 are obtained by sampling: a first intersection point sequence of the distorted grid lines and the corresponding second intersection point sequence of the equally divided grid lines are collected, the samples are substituted into equation (1), and the distortion parameters K1, K2 and K3 are solved for.
4. The virtual image distortion correction method according to claim 3, wherein the step of performing inverse distortion processing on the virtual image to be output by using the distortion parameter to obtain an inverse-distortion-processed virtual image includes:
calculating first coordinate information after distortion mapping of each pixel in the virtual image to be output through the formula (1), and obtaining a first distortion-removed image according to all the first coordinate information;
and performing inverse transformation on the first undistorted image by using the homography matrix to obtain a virtual image after the inverse distortion processing.
5. The virtual image distortion correction method according to claim 2, wherein the step of calculating the distortion parameters according to the positional correspondence between the distorted grid lines and the equally divided grid lines comprises:
acquiring a first cross point sequence of the distorted grid lines and a second cross point sequence of the corresponding equally-divided grid lines;
and constructing a position mapping table according to the position relation between the first cross point sequence and the second cross point sequence, wherein the position mapping table is a distortion parameter.
6. The virtual image distortion correction method according to claim 5, wherein the step of performing inverse distortion processing on the virtual image to be output by using the distortion parameter to obtain an inverse-distortion-processed virtual image includes:
obtaining second coordinate information after distortion mapping of each pixel in the virtual image to be output according to the coordinate mapping table, and obtaining a second distortion-removed image according to all the second coordinate information;
and performing inverse transformation on the second undistorted image by using the homography matrix to obtain the virtual image after the inverse distortion processing.
7. A virtual image distortion correction system, comprising:
the display screen is electrically connected with the processor and used for outputting and displaying a target virtual image and projecting the target virtual image;
the lens is arranged in the projection direction of the display screen and used for outputting and displaying a target projection image corresponding to the target virtual image;
the camera is electrically connected with the processor and used for acquiring the target projection image;
the processor is used for receiving the target projection image, calculating a distortion parameter according to the target projection image, performing inverse distortion processing on a virtual image to be output by using the distortion parameter to obtain an inverse distortion processed virtual image, and transmitting the inverse distortion processed virtual image to the display screen for output and display.
8. The virtual image distortion correction system of claim 7, wherein the processor comprises:
the first trapezoidal correction module is used for carrying out trapezoidal correction processing on the target projection image to obtain a trapezoidal corrected image and a homography matrix of the trapezoidal corrected image;
the distorted grid line obtaining module is used for extracting the grid lines of the trapezoidal corrected image to obtain distorted grid lines;
an equally-divided grid line obtaining module, configured to obtain a largest inscribed rectangle in the trapezoidal-corrected image, and extract grid lines of the largest inscribed rectangle to obtain equally-divided grid lines;
and the distortion parameter calculation module is used for calculating the distortion parameters according to the positional correspondence between the distorted grid lines and the equally divided grid lines.
9. The virtual image distortion correction system of claim 8, wherein the distortion parameter calculation module comprises:
an acquisition unit for acquiring a first intersection A(X0, Y0) in the distorted grid lines and a second intersection A1(X1, Y1) corresponding to the first intersection A in the equally divided grid lines;
a first distortion parameter calculation unit for calculating the distortion parameters according to formula (1):
wherein the first intersection point A and the second intersection point A1 are obtained by sampling: a first intersection point sequence of the distorted grid lines and the corresponding second intersection point sequence of the equally divided grid lines are collected, the samples are substituted into formula (1), and the distortion parameters K1, K2 and K3 are solved for.
10. The virtual image distortion correction system of claim 9, wherein the processor further comprises:
the first distortion removal module is used for calculating first coordinate information after distortion mapping of each pixel in the virtual image to be output through the formula (1), and obtaining a first distortion removal image according to all the first coordinate information;
and the second trapezoidal correction module is used for performing inverse transformation on the first undistorted image by using the homography matrix to obtain a virtual image after inverse distortion processing.
11. The virtual image distortion correction system of claim 8, wherein the distortion parameter calculation module comprises:
the acquisition unit is used for acquiring a first cross point sequence of the distorted grid lines and a second cross point sequence of the corresponding equally-divided grid lines;
and the second distortion parameter calculation unit is used for constructing a position mapping table according to the position relation between the first cross point sequence and the second cross point sequence, wherein the position mapping table is a distortion parameter.
12. The virtual image distortion correction system of claim 11, wherein the processor further comprises:
the second distortion removing module is used for obtaining second coordinate information after distortion mapping of each pixel in the virtual image to be output according to the coordinate mapping table and obtaining a second distortion removed image according to all the second coordinate information;
and the third trapezoidal correction module is used for performing inverse transformation on the second undistorted image by using the homography matrix to obtain the virtual image after the inverse distortion processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710340695.1A CN108876725A (en) | 2017-05-12 | 2017-05-12 | A kind of virtual image distortion correction method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710340695.1A CN108876725A (en) | 2017-05-12 | 2017-05-12 | A kind of virtual image distortion correction method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108876725A true CN108876725A (en) | 2018-11-23 |
Family
ID=64320532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710340695.1A Pending CN108876725A (en) | 2017-05-12 | 2017-05-12 | A kind of virtual image distortion correction method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108876725A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109688392A (en) * | 2018-12-26 | 2019-04-26 | 联创汽车电子有限公司 | AR-HUD optical projection system and mapping relations scaling method and distortion correction method |
CN109754380A (en) * | 2019-01-02 | 2019-05-14 | 京东方科技集团股份有限公司 | A kind of image processing method and image processing apparatus, display device |
CN109799073A (en) * | 2019-02-13 | 2019-05-24 | 京东方科技集团股份有限公司 | A kind of optical distortion measuring device and method, image processing system, electronic equipment and display equipment |
CN109993713A (en) * | 2019-04-04 | 2019-07-09 | 百度在线网络技术(北京)有限公司 | Vehicle-mounted head-up display system pattern distortion antidote and device |
CN110827214A (en) * | 2019-10-18 | 2020-02-21 | 南京睿悦信息技术有限公司 | Method for automatically calibrating and generating off-axis anti-distortion texture coordinates |
CN110866867A (en) * | 2019-11-18 | 2020-03-06 | 深圳传音控股股份有限公司 | Terminal mirror image display method, terminal and computer readable storage medium |
CN110996081A (en) * | 2019-12-06 | 2020-04-10 | 北京一数科技有限公司 | Projection picture correction method and device, electronic equipment and readable storage medium |
CN112164378A (en) * | 2020-10-28 | 2021-01-01 | 上海盈赞通信科技有限公司 | VR glasses all-in-one machine anti-distortion method and device |
CN112288651A (en) * | 2020-10-28 | 2021-01-29 | 上海盈赞通信科技有限公司 | Method and device for rapidly realizing video anti-distortion |
CN112785530A (en) * | 2021-02-05 | 2021-05-11 | 广东九联科技股份有限公司 | Image rendering method, device and equipment for virtual reality and VR equipment |
WO2021238564A1 (en) * | 2020-05-28 | 2021-12-02 | 京东方科技集团股份有限公司 | Display device and distortion parameter determination method, apparatus and system thereof, and storage medium |
WO2022133953A1 (en) * | 2020-12-24 | 2022-06-30 | 京东方科技集团股份有限公司 | Image distortion processing method and apparatus |
US11633235B2 (en) * | 2017-07-31 | 2023-04-25 | Children's National Medical Center | Hybrid hardware and computer vision-based tracking system and method |
CN117014589A (en) * | 2023-09-27 | 2023-11-07 | 北京凯视达科技股份有限公司 | Projection method, projection device, electronic equipment and storage medium |
WO2024040398A1 (en) * | 2022-08-22 | 2024-02-29 | 京东方科技集团股份有限公司 | Correction function generation method and apparatus, and image correction method and apparatus |
WO2024183694A1 (en) * | 2023-03-07 | 2024-09-12 | 北京字跳网络技术有限公司 | Image processing method and apparatus, and device, computer-readable storage medium and product |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102611822A (en) * | 2012-03-14 | 2012-07-25 | 海信集团有限公司 | Projector and projection image rectifying method thereof |
CN103247031A (en) * | 2013-04-19 | 2013-08-14 | 华为技术有限公司 | Method, terminal and system for correcting aberrant image |
US9134593B1 (en) * | 2010-12-23 | 2015-09-15 | Amazon Technologies, Inc. | Generation and modulation of non-visible structured light for augmented reality projection system |
CN106056560A (en) * | 2015-04-03 | 2016-10-26 | 康耐视公司 | Homography rectification |
CN106127714A (en) * | 2016-07-01 | 2016-11-16 | 南京睿悦信息技术有限公司 | A kind of measuring method of virtual reality head-mounted display equipment distortion parameter |
CN106162124A (en) * | 2016-08-02 | 2016-11-23 | 上海唱风信息科技有限公司 | The calibration steps of scialyscope output image |
CN106447602A (en) * | 2016-08-31 | 2017-02-22 | 浙江大华技术股份有限公司 | Image mosaic method and device |
CN106507077A (en) * | 2016-11-28 | 2017-03-15 | 江苏鸿信系统集成有限公司 | Projecting apparatus picture based on graphical analysis is corrected and blocks preventing collision method |
CN106527857A (en) * | 2016-10-10 | 2017-03-22 | 成都斯斐德科技有限公司 | Virtual reality-based panoramic video interaction method |
- 2017-05-12 CN CN201710340695.1A patent/CN108876725A/en active Pending
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9134593B1 (en) * | 2010-12-23 | 2015-09-15 | Amazon Technologies, Inc. | Generation and modulation of non-visible structured light for augmented reality projection system |
CN102611822A (en) * | 2012-03-14 | 2012-07-25 | 海信集团有限公司 | Projector and projection image rectifying method thereof |
CN103247031A (en) * | 2013-04-19 | 2013-08-14 | 华为技术有限公司 | Method, terminal and system for correcting aberrant image |
CN106056560A (en) * | 2015-04-03 | 2016-10-26 | 康耐视公司 | Homography rectification |
CN106127714A (en) * | 2016-07-01 | 2016-11-16 | 南京睿悦信息技术有限公司 | A kind of measuring method of virtual reality head-mounted display equipment distortion parameter |
CN106162124A (en) * | 2016-08-02 | 2016-11-23 | 上海唱风信息科技有限公司 | The calibration steps of scialyscope output image |
CN106447602A (en) * | 2016-08-31 | 2017-02-22 | 浙江大华技术股份有限公司 | Image mosaic method and device |
CN106527857A (en) * | 2016-10-10 | 2017-03-22 | 成都斯斐德科技有限公司 | Virtual reality-based panoramic video interaction method |
CN106507077A (en) * | 2016-11-28 | 2017-03-15 | 江苏鸿信系统集成有限公司 | Projecting apparatus picture based on graphical analysis is corrected and blocks preventing collision method |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11633235B2 (en) * | 2017-07-31 | 2023-04-25 | Children's National Medical Center | Hybrid hardware and computer vision-based tracking system and method |
CN109688392B (en) * | 2018-12-26 | 2021-11-02 | 联创汽车电子有限公司 | AR-HUD optical projection system, mapping relation calibration method and distortion correction method |
CN109688392A (en) * | 2018-12-26 | 2019-04-26 | 联创汽车电子有限公司 | AR-HUD optical projection system and mapping relations scaling method and distortion correction method |
CN109754380A (en) * | 2019-01-02 | 2019-05-14 | 京东方科技集团股份有限公司 | A kind of image processing method and image processing apparatus, display device |
US11435289B2 (en) | 2019-02-13 | 2022-09-06 | Beijing Boe Optoelectronics Technology Co., Ltd. | Optical distortion measuring apparatus and optical distortion measuring method, image processing system, electronic apparatus and display apparatus |
CN109799073A (en) * | 2019-02-13 | 2019-05-24 | 京东方科技集团股份有限公司 | A kind of optical distortion measuring device and method, image processing system, electronic equipment and display equipment |
CN109993713A (en) * | 2019-04-04 | 2019-07-09 | 百度在线网络技术(北京)有限公司 | Vehicle-mounted head-up display system pattern distortion antidote and device |
CN110827214A (en) * | 2019-10-18 | 2020-02-21 | 南京睿悦信息技术有限公司 | Method for automatically calibrating and generating off-axis anti-distortion texture coordinates |
CN110866867A (en) * | 2019-11-18 | 2020-03-06 | 深圳传音控股股份有限公司 | Terminal mirror image display method, terminal and computer readable storage medium |
CN110866867B (en) * | 2019-11-18 | 2024-02-20 | 深圳传音控股股份有限公司 | Terminal mirror image display method, terminal and computer readable storage medium |
CN110996081B (en) * | 2019-12-06 | 2022-01-21 | 北京一数科技有限公司 | Projection picture correction method and device, electronic equipment and readable storage medium |
CN110996081A (en) * | 2019-12-06 | 2020-04-10 | 北京一数科技有限公司 | Projection picture correction method and device, electronic equipment and readable storage medium |
WO2021238564A1 (en) * | 2020-05-28 | 2021-12-02 | 京东方科技集团股份有限公司 | Display device and distortion parameter determination method, apparatus and system thereof, and storage medium |
CN112164378A (en) * | 2020-10-28 | 2021-01-01 | 上海盈赞通信科技有限公司 | VR glasses all-in-one machine anti-distortion method and device |
CN112288651A (en) * | 2020-10-28 | 2021-01-29 | 上海盈赞通信科技有限公司 | Method and device for rapidly realizing video anti-distortion |
US11854170B2 (en) | 2020-12-24 | 2023-12-26 | Beijing Boe Optoelectronics Technology Co., Ltd. | Method and apparatus of processing image distortion |
WO2022133953A1 (en) * | 2020-12-24 | 2022-06-30 | 京东方科技集团股份有限公司 | Image distortion processing method and apparatus |
CN112785530A (en) * | 2021-02-05 | 2021-05-11 | 广东九联科技股份有限公司 | Image rendering method, device and equipment for virtual reality and VR equipment |
CN112785530B (en) * | 2021-02-05 | 2024-05-24 | 广东九联科技股份有限公司 | Image rendering method, device and equipment for virtual reality and VR equipment |
WO2024040398A1 (en) * | 2022-08-22 | 2024-02-29 | 京东方科技集团股份有限公司 | Correction function generation method and apparatus, and image correction method and apparatus |
WO2024183694A1 (en) * | 2023-03-07 | 2024-09-12 | 北京字跳网络技术有限公司 | Image processing method and apparatus, and device, computer-readable storage medium and product |
CN117014589A (en) * | 2023-09-27 | 2023-11-07 | 北京凯视达科技股份有限公司 | Projection method, projection device, electronic equipment and storage medium |
CN117014589B (en) * | 2023-09-27 | 2023-12-19 | 北京凯视达科技股份有限公司 | Projection method, projection device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108876725A (en) | A kind of virtual image distortion correction method and system | |
EP3070680B1 (en) | Image-generating device and method | |
US9651782B2 (en) | Wearable tracking device | |
US8817017B2 (en) | 3D digital painting | |
CN109801379B (en) | Universal augmented reality glasses and calibration method thereof | |
JP5632100B2 (en) | Facial expression output device and facial expression output method | |
US9979946B2 (en) | I/O device, I/O program, and I/O method | |
US9933853B2 (en) | Display control device, display control program, and display control method | |
US9440484B2 (en) | 3D digital painting | |
CN104536579A (en) | Interactive three-dimensional scenery and digital image high-speed fusing processing system and method | |
WO2014128751A1 (en) | Head mount display apparatus, head mount display program, and head mount display method | |
US10171800B2 (en) | Input/output device, input/output program, and input/output method that provide visual recognition of object to add a sense of distance | |
CN112929651A (en) | Display method, display device, electronic equipment and storage medium | |
JP6708444B2 (en) | Image processing apparatus and image processing method | |
CN111488056A (en) | Manipulating virtual objects using tracked physical objects | |
TWI501193B (en) | Computer graphics using AR technology. Image processing systems and methods | |
US10296098B2 (en) | Input/output device, input/output program, and input/output method | |
CN111491159A (en) | Augmented reality display system and method | |
WO2019048819A1 (en) | A method of modifying an image on a computational device | |
JP6168597B2 (en) | Information terminal equipment | |
JP5759439B2 (en) | Video communication system and video communication method | |
KR102534449B1 (en) | Image processing method, device, electronic device and computer readable storage medium | |
JP2015184986A (en) | Compound sense of reality sharing device | |
CN115311133A (en) | Image processing method and device, electronic equipment and storage medium | |
US20170302904A1 (en) | Input/output device, input/output program, and input/output method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181123 |