
CN111047709B - Binocular vision naked eye 3D image generation method - Google Patents


Info

Publication number
CN111047709B
CN111047709B (application CN201911202342.0A)
Authority
CN
China
Prior art keywords
image
binocular
camera
eye
color
Prior art date
Legal status
Active
Application number
CN201911202342.0A
Other languages
Chinese (zh)
Other versions
CN111047709A (en)
Inventor
黄书强
狄红卫
江秀美
尹红宽
杜红涛
郑晓洁
王春旭
彭文涛
Current Assignee
Jinan University
Original Assignee
Jinan University
Priority date
Filing date
Publication date
Application filed by Jinan University
Priority: CN201911202342.0A
Publication of CN111047709A
Application granted
Publication of CN111047709B
Legal status: Active

Classifications

    • G — PHYSICS; G06 — COMPUTING, CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/006 — Mixed reality (manipulating 3D models or images for computer graphics)
    • G06T 5/80 — Geometric correction (image enhancement or restoration)
    • G06T 7/85 — Stereo camera calibration (analysis of captured images to determine intrinsic or extrinsic camera parameters)
    • G06T 2200/08 — Indexing scheme involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a binocular-vision naked-eye 3D image generation method comprising the following steps: acquiring an original color/grayscale binocular image pair with an acquisition device; calibrating the acquisition device against the acquired image pair to obtain the relative positions of its cameras; rectifying the original binocular images using those relative positions to obtain corrected binocular images, namely a black-and-white right-eye image and a color left-eye image; running a binocular matching algorithm on the corrected images to generate a disparity map; and rendering the black-and-white right-eye image into a color right-eye image using the disparity map, compositing the color left-eye and right-eye images into one image, and performing pixel interleaving to produce the final binocular image. The invention adopts a binocular matching method with high matching precision, reduces hardware cost, and can be applied to real-time scenes with high precision and comfort.

Description

Binocular vision naked eye 3D image generation method
Technical Field
The invention relates to the field of image processing, and in particular to a binocular vision naked eye 3D image generation method.
Background
With the development of cameras and computer technology, enabling computers to understand and interpret images as humans do has become the goal of more and more researchers, and the discipline of computer vision has emerged. Naked-eye (glasses-free) 3D display is at the forefront of current display technology and a trend of future development. At present, 3D shooting and production with binocular cameras is gradually becoming mainstream, and efficient, low-cost conversion of binocular images into naked-eye 3D display content has become a research hotspot; the key point and difficulty lie in realizing a binocular matching algorithm with both high matching precision and high speed.
Since the early 1970s, naked-eye 3D display has been one of the directions the industry actively explores, and it saw a surge of interest a few years ago. Binocular matching based on visual computing theory generates a stereoscopic image with depth from two plane views with parallax; by optimization method, matching algorithms divide into global and local algorithms. Global algorithms have higher matching precision, but their complexity is high and they cannot meet real-time requirements. Local algorithms aggregate cost by summing or averaging it over a support region and then select the optimal disparity with a winner-takes-all (WTA) strategy; they have low complexity and strong real-time performance, but lower matching accuracy than global algorithms. For naked-eye 3D, screen technology is the hardware basis and display content is the core: without a sufficient amount of 3D content with a high-quality stereoscopic effect, naked-eye 3D technology is difficult to develop. At present, 3D shooting and production with binocular cameras is gradually becoming mainstream, and how to efficiently and cheaply convert binocular images into naked-eye 3D display content has become a research hotspot.
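The WTA disparity selection used by local algorithms can be sketched in a few lines: given an aggregated cost volume, each pixel simply takes the disparity hypothesis with minimum cost. The toy cost volume below is illustrative data, not from the patent.

```python
import numpy as np

def wta_disparity(cost_volume):
    """Winner-takes-all: per pixel, pick the disparity with minimum cost.

    cost_volume: (H, W, D) array where entry (y, x, d) is the aggregated
    matching cost of assigning disparity d to pixel (x, y).
    """
    return np.argmin(cost_volume, axis=2)

# Toy cost volume: a 2x2 image with 3 disparity hypotheses per pixel.
costs = np.array([[[3., 1., 2.], [0., 5., 4.]],
                  [[2., 2., 0.], [9., 1., 3.]]])
disp = wta_disparity(costs)
print(disp)  # [[1 0]
             #  [2 1]]
```

The simplicity of this per-pixel minimum is exactly why local methods are fast; all of the matching quality comes from how the costs were aggregated beforehand.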
Traditional 2D display technology can only show planar images; it conveys no depth and cannot satisfy people's pursuit of vivid, stereoscopic vision. Traditional 3D technology, however, is limited in its fields of application, with insufficient fidelity and poor display quality, and cannot achieve real-time matching for real-time scenes. The stereoscopic effect of existing 3D video is uneven: video with a good stereoscopic effect usually requires complex production steps and high cost, and the resulting scarcity of 3D resources has become an important factor restricting the development of naked-eye 3D technology. The key point and difficulty of current binocular stereo vision research is improving the matching precision of the binocular matching algorithm; existing algorithms with higher matching precision generally have a large computational load and are time-consuming, so they cannot achieve real-time matching or be applied to real-time scenes.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art by providing a binocular vision naked eye 3D image generation method: a high-precision, high-speed stereoscopic vision system built on a binocular matching algorithm with high matching precision, which reduces the expensive hardware cost of traditional 3D display technology and allows naked-eye 3D display to be applied to real-time scenes with high precision and comfort.
The aim of the invention is achieved by the following technical scheme:
the binocular vision naked eye 3D image generation method is characterized by comprising the following steps of:
acquiring an original binocular image pair of color and gray through an acquisition device;
calibrating the acquisition devices to the acquired original binocular image pair to obtain the relative positions among the acquisition devices;
performing binocular correction on the original binocular image through the relative positions of the acquisition devices to obtain corrected binocular images, wherein the corrected binocular images comprise a black-white right-eye image and a color left-eye image;
performing binocular matching on the corrected binocular image by using a binocular matching algorithm to generate a parallax image;
and (3) performing image drawing on the black-white right-eye image by utilizing the parallax image to generate a color right-eye image, synthesizing the color left-eye image and the color right-eye image into one image, and performing pixel interleaving to generate a final binocular image.
Further, the image acquisition by the acquisition device is specifically: shooting with a dual-camera naked-eye 3D image acquisition device. The main camera is opened first, its parameters are set, preview is started, and a frame is captured; the device then checks whether the current camera is the main camera — if so, the secondary camera is opened, its parameters are set, preview is started, and a frame is captured; if not, the main camera is opened and the same steps are performed. This yields the original binocular pair: the color image is obtained by the main camera and the black-and-white image by the secondary camera.
Further, the calibration of the acquisition device is specifically: monocular calibration is performed separately on the main and secondary cameras of the image acquisition device; after their respective camera parameters are obtained, binocular (stereo) calibration is performed to determine the relative position relationship between the main and secondary cameras. A binocular camera calibration tool is used: a checkerboard template is set up on an image projection screen, the Z coordinates of the checkerboard points in the world coordinate system are defined as 0 with the first inner corner at the upper left as the origin, and the checkerboard is made to fill the main picture of the device's lens. Several checkerboard image pairs are captured from different angles and imported into the stereo calibration tool together with the actual checkerboard square size, and corner points are detected; only image pairs whose left and right corner points can be placed in one-to-one correspondence qualify for computing the camera parameters. A distortion model with 2 radial and 2 tangential distortion coefficients is selected, the camera parameters are computed, and the relative position relationship between the main and secondary cameras of the image acquisition device is thereby obtained.
Further, the binocular rectification is specifically: the acquired original binocular images are rectified with the Bouguet epipolar rectification algorithm. Using the camera parameters obtained from calibration and the relative position relationship between the main and secondary cameras of the image acquisition device, the stereoRectify function computes the rectifying rotation matrix R'_l of the main camera, the rectifying rotation matrix R'_r of the secondary camera, the projection matrices P_l and P_r of the main and secondary cameras, and the reprojection matrix Q; the initUndistortRectifyMap function yields the rectification mapping parameters of the left- and right-eye images; and the rectified binocular images are obtained by applying those mapping parameters with the remap function. Specifically: the rotation matrix R between the main and secondary cameras is split into a rotation r_l for the main camera and a rotation r_r for the secondary camera, so that each camera rotates by half; a transform matrix R_c is then constructed from the translation vector T between the cameras, R_c = [e_1 e_2 e_3]^T, where e_1 points along the translation vector T (the epipole direction), e_2 is orthogonal to e_1 and to the principal optical axis, and e_3 is orthogonal to e_2 and e_1. Following the standard Bouguet formulation, the rectifying rotations are computed as:

e_1 = T / ||T||,   e_2 = [-T_y, T_x, 0]^T / sqrt(T_x^2 + T_y^2),   e_3 = e_1 × e_2,
R'_l = R_c · r_l,   R'_r = R_c · r_r
Before rectification the optical axes of the main and secondary cameras are not parallel. The line joining the two optical centers is called the baseline; the intersection of an image plane with the baseline is an epipole; the line through an image point and the epipole is an epipolar line; and the plane containing the left epipolar line, the baseline, and the right epipolar line is the epipolar plane of the corresponding space point.
Further, the binocular matching specifically includes: the disparity map computed from the corrected binocular images still contains mismatched points in occlusion regions and at disparity jump boundaries. For occlusion regions, mismatches are found by left-right consistency (LRC) detection: the disparity difference between corresponding points of the left and right images should be zero, otherwise the point is considered a mismatch; from each detected mismatch the first correct matching points are searched in the two horizontal directions, and the erroneous disparity value of the occlusion region is replaced by the smaller of the two disparity values. For mismatches on a disparity jump boundary, for each boundary pixel p the pixels q_l and q_r to its left and right are selected, with disparity values D(q_l) and D(q_r) respectively, and the disparity of the pixel is set by:

D(p) = D(q_l) if C(p, D(q_l)) ≤ C(p, D(q_r)), otherwise D(q_r)

where q_l and q_r are the first correct matching points found horizontally to the left and right of the detected mismatch; D(q_l) and D(q_r) are their corresponding disparity values; D(p) is the disparity value of the pixel; and C(p, d) is the input matching cost map to be filtered.
Guided filtering with an adaptive window eliminates mismatches at texture edges where disparity is continuous; the initialized disparity information shrinks the region used to build the adaptive window and the disparity cost aggregation space, reducing computation time, while the adaptive window strengthens matching precision in weak-texture regions and preserves the edge characteristics of the disparity map.
The binocular images captured by the combined color and grayscale dual cameras are converted into uniform grayscale images after camera calibration and binocular rectification; the binocular grayscale images are then matched with a stereo binocular matching algorithm to generate the disparity map.
Further, the image rendering specifically includes: the color right-eye image is rendered using depth information. The disparity map obtained from binocular matching is multiplied by a coefficient b and used as the remapping relation map; remapping the color left-eye image then yields color right-eye images with different disparity values, from which 3D images of different degrees of stereoscopic effect are synthesized. The left and right disparity maps, multiplied by the coefficient b, serve as the remapped pixel-coordinate conversion maps, and the color right-eye image is rendered from the color left-eye image by forward mapping and backward mapping respectively. The color left-eye image and the rendered color right-eye image are composited into a 3D image and pixel-interleaved by the system's interleaving algorithm for display on the naked-eye 3D screen.
Further, the color right-eye images of different disparity values are obtained either by forward-mapping a new right-eye image or by backward-mapping a right-eye image.
Further, regarding the coefficient b: when b is less than 1.5, the color right-eye image is rendered by backward mapping without hole filling; when b is greater than 2.5, the right-eye image is rendered by forward mapping followed by hole filling.
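The threshold rule above can be sketched as a small selector function. Note the patent leaves the range 1.5 ≤ b ≤ 2.5 unspecified; defaulting to backward mapping there is this sketch's assumption, not the patent's.

```python
def choose_rendering_mode(b):
    """Pick a right-view rendering strategy from the disparity scale b.

    Thresholds follow the text: backward mapping (no hole filling) for
    b < 1.5; forward mapping plus hole filling for b > 2.5.  The middle
    range is not specified by the text -- defaulting to backward mapping
    is an assumption of this sketch.
    """
    if b < 1.5:
        return "backward"
    if b > 2.5:
        return "forward_with_hole_filling"
    return "backward"  # assumed default for the unspecified middle range

print(choose_rendering_mode(1.2))  # backward
print(choose_rendering_mode(3.0))  # forward_with_hole_filling
```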
Compared with the prior art, the invention has the following advantages and beneficial effects:
the invention provides a matching cost algorithm of gradient and improved Census transformation self-adaptive weighting fusion, and effectively improves the stability of the matching algorithm to illumination distortion. The cost aggregation algorithm combining the guided filtering and the self-adaptive window is provided, namely binocular matching is performed, and the matching precision of a low texture region and a parallax discontinuous region is improved; and acquiring binocular images by acquiring a double-camera application program of the auxiliary camera. In order to maintain the edge characteristics of the disparity map, guided filtering is adopted to aggregate matching cost; obtaining an aggregation cost through guided filtering, and selecting a parallax value corresponding to the minimum matching cost by using a WTA strategy to obtain an initial parallax map; and introducing an adaptive window to filter the matched cost map after the guided filtering again so as to obtain better edge maintaining characteristics, and obtaining left and right parallax maps to obtain higher precision.
Drawings
Fig. 1 is a flowchart of a binocular vision naked eye 3D image generation method according to the present invention;
FIG. 2 is a flow chart of original left and right image acquisition in the embodiment of the invention;
FIG. 3 is a schematic diagram of mapping and drawing a color right-eye image forward in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram of a backward mapping color right-eye image according to an embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
Examples:
a binocular vision naked eye 3D image generation method comprises the following steps:
acquiring an original binocular image pair of color and gray through an image acquisition device;
the calibration of the image acquisition devices is completed for the acquired original binocular image pair, and the relative positions among the image acquisition devices are obtained;
performing binocular correction on the original binocular image through the relative positions of the image acquisition devices to obtain corrected binocular images, wherein the corrected binocular images comprise a black-white right-eye image and a color left-eye image;
performing binocular matching on the corrected binocular image by using a binocular matching algorithm to generate a parallax image;
and (3) performing image drawing on the black-white right-eye image by utilizing the parallax image to generate a color right-eye image, synthesizing the color left-eye image and the color right-eye image into one image, and performing pixel interleaving to generate a final binocular image.
Collecting binocular images;
the acquisition of binocular images is the first step in achieving binocular stereo vision. The invention adopts a double-camera naked eye 3D mobile phone center-of-play astronomical crane 7MAX as a hardware equipment platform for experiments, and is used for collecting original binocular images and testing the three-dimensional display effect of the finally synthesized 3D images. Because the grayscale secondary camera in the mobile phone is invisible to the upper layer APP and cannot directly obtain the image of the camera through the system camera of the mobile phone, a double-camera application program capable of obtaining the secondary camera image needs to be redeveloped to obtain the original binocular image. The acquisition flow is shown in fig. 2. According to the invention, the application software of the mobile phone double-Camera is developed on the Android Studio platform by using the bottom layer Camera interface and the corresponding so library of the Zhen XingTianji 7MAX provided by Shenzhen color technology Co. Since the two images obtained by the main camera and the auxiliary camera are not both color images, the 3D images cannot be directly synthesized for display.
Calibrating a camera;
the purpose of monocular camera calibration is to calculate the internal and external parameters and distortion parameters of the cameras, and the purpose of double-target calibration is to obtain the relative position relationship between the two cameras, namely the relative position relationship between the main camera and the auxiliary camera of the image acquisition device, and the parameters required by double-target correction can be obtained through camera calibration. Monocular calibration is carried out on the two cameras respectively, double-target calibration is also carried out after respective camera parameters are obtained, and the relative position relationship of the two cameras is determined. The invention is based on a Zhang Zhengyou camera calibration method, and a binocular camera calibration tool Stereo Camera Clibrator (SCC) on a Matlab platform is used for calibrating a rear-mounted double camera of an XingTianji 7MAX in an experimental mobile phone. The experiment adopts a checkerboard template, the number of internal angles is 9 multiplied by 6, the checkerboard picture is displayed on a computer screen, and the side length of each checkerboard is 25mm. The Z coordinates of the points on the checkerboard template in the world coordinate system are defined as 0, and the first inner corner point of the upper left corner is the origin. The mobile phone is fixed by the tripod, so that the checkerboard images occupy the main picture of the lens, and the checkerboard image pairs are acquired from different angles. In order to reduce the error of camera calibration, more than ten pairs of checkerboard images need to be acquired in experiments. 
The image pairs are imported into Matlab's stereo calibration tool SCC, the actual size of the photographed squares (25 mm) is entered, and corner points are detected; only pairs whose left and right corner points can be placed in one-to-one correspondence qualify for computing the camera parameters. To verify the accuracy of the result, the world coordinates of the corners are reprojected onto the original left and right images using the computed camera parameters and compared with the originally detected corner positions; the differences between the two are the reprojection errors.
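The reprojection-error check can be sketched with a distortion-free pinhole model: project the known board corners through the estimated camera and compare with the detected corners. The intrinsics, pose, and corner layout below are hypothetical illustration values, not the patent's calibration results, and lens distortion is deliberately omitted.

```python
import numpy as np

def reproject(K, R, t, pts_world):
    """Project 3-D world points through a pinhole camera (no distortion).

    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation.
    Returns an (N, 2) array of pixel coordinates.
    """
    pc = R @ pts_world.T + t.reshape(3, 1)   # world -> camera coordinates
    h = K @ pc                               # homogeneous image coordinates
    return (h[:2] / h[2]).T                  # perspective divide

def rms_reprojection_error(projected, detected):
    d = projected - detected
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

# Hypothetical checkerboard corners on the Z = 0 plane, 25 mm squares.
obj = np.array([[x * 25.0, y * 25.0, 0.0] for y in range(2) for x in range(3)])
K = np.array([[800., 0., 400.], [0., 800., 300.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 500.])   # board 500 mm in front
uv = reproject(K, R, t, obj)
print(rms_reprojection_error(uv, uv))  # 0.0 for perfect detections
```

In a real calibration the detected corners differ from the reprojections, and the RMS of those residuals is the reported reprojection error.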
Binocular correction
The invention performs binocular rectification of the acquired original binocular images with OpenCV's built-in method based on the Bouguet epipolar rectification algorithm. Using the camera intrinsics obtained from the calibration above and the relative position relationship between the main and secondary cameras of the image acquisition device, the stereoRectify function of OpenCV 3.3 computes the rectifying rotation matrices R'_l and R'_r of the main and secondary cameras, the projection matrices P_l and P_r, and the reprojection matrix Q; Q is the matrix relating the pixel coordinates (u, v) — the row and column indices of an image point, together with its disparity — to the world coordinates (X_w, Y_w, Z_w) of the space point P. The initUndistortRectifyMap function then yields the rectification mapping parameters of the left and right images. Finally, the rectified left and right images are obtained by applying each image's mapping parameters with the remap function. Specifically: the rotation matrix R between the two cameras is split into a rotation r_l for the main camera and a rotation r_r for the secondary camera, so that each camera rotates by half; a transform matrix R_c is then constructed from the translation vector T between the cameras, R_c = [e_1 e_2 e_3]^T, where e_1 points along T (the epipole direction), e_2 is orthogonal to e_1 and to the principal optical axis, and e_3 is orthogonal to e_2 and e_1. Following the standard Bouguet formulation:

e_1 = T / ||T||,   e_2 = [-T_y, T_x, 0]^T / sqrt(T_x^2 + T_y^2),   e_3 = e_1 × e_2,
R'_l = R_c · r_l,   R'_r = R_c · r_r
the optical centers of the main camera and the auxiliary camera before correction are not parallel, the connecting line of the two optical centers is called a base line, the intersection point of the image plane and the base line is a pole, the straight line of the image point and the pole is an polar line, and the plane formed by the left polar line and the base line and the right polar line is a polar plane corresponding to the space point.
Parallax correction
For the obtained disparity map, error disparity values existing in the occlusion region and the disparity jump boundary also need to be corrected.
For mismatches in the occlusion region, mismatched points are found by left-right consistency detection (LRC). The left and right disparity maps are obtained with the algorithm proposed above. The disparity difference between corresponding points of the left and right images should be zero; otherwise, the point is considered a mismatch. A mismatch detected by LRC generally lies in an occlusion region, whose true disparity is considered close to the background disparity; therefore, from each detected mismatch the first correct matching points are searched horizontally to the left and right, and the erroneous disparity value of the occlusion region is replaced by the smaller of the two disparity values. For mismatches at a disparity jump boundary, for each boundary pixel p the pixels q_l and q_r to its left and right are selected, with corresponding disparity values D(q_l) and D(q_r); the disparity value of the p point is then determined using the following formula:
D(p) = D(q_l) if C(p, D(q_l)) ≤ C(p, D(q_r)), otherwise D(q_r)
where q_l and q_r are the first correct matching points found horizontally to the left and right of the detected mismatch; D(q_l) and D(q_r) are their corresponding disparity values; D(p) is the disparity value of the pixel p; and C(p, d) is the input matching cost map to be filtered.
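The LRC check and the occlusion-filling rule can be sketched as follows. This is a minimal sketch assuming integer disparity maps and the convention that a left-image pixel at x with disparity d corresponds to right-image column x − d; the boundary (cost-based) rule is omitted for brevity.

```python
import numpy as np

def lrc_invalid(dl, dr, tol=1):
    """Left-right consistency: pixel (y, x) with left disparity d should
    satisfy dr[y, x - d] ~= d; otherwise it is flagged as a mismatch."""
    H, W = dl.shape
    xs = np.arange(W)
    bad = np.ones((H, W), dtype=bool)
    for y in range(H):
        xr = xs - dl[y]                      # corresponding right-image column
        ok = (xr >= 0) & (np.abs(dr[y, np.clip(xr, 0, W - 1)] - dl[y]) <= tol)
        bad[y] = ~ok
    return bad

def fill_occlusions(dl, bad):
    """Replace each invalid disparity with the smaller of the first valid
    disparities to its left and right (assumed close to the background)."""
    out = dl.astype(float).copy()
    H, W = dl.shape
    for y in range(H):
        for x in np.where(bad[y])[0]:
            left = [out[y, i] for i in range(x - 1, -1, -1) if not bad[y, i]]
            right = [out[y, i] for i in range(x + 1, W) if not bad[y, i]]
            cands = ([left[0]] if left else []) + ([right[0]] if right else [])
            if cands:
                out[y, x] = min(cands)
    return out
```

Taking the *smaller* candidate implements the assumption stated above: occluded pixels belong to the background, which has the smaller disparity.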
generation of binocular images
With binocular rectification of the binocular images complete and an illumination-distortion-resistant binocular matching algorithm proposed, the applicability of the proposed algorithm to actual scenes must be verified: the left-eye color image and the generated disparity map are used to render the right-eye color image, and finally a 3D image usable for naked-eye 3D display is synthesized and its display effect evaluated.
The binocular images captured by the combined color and grayscale dual cameras are converted into uniform grayscale images after camera calibration and binocular rectification. To reduce computation, the original 3200×2400 images are downscaled to 800×600, and the binocular grayscale images are matched with the stereo binocular matching algorithm to generate a disparity map. A color right-eye image is then rendered using the depth information and finally fused into a color 3D image. Following the theory of rendering a new image from a disparity map, the disparity map obtained by binocular matching is multiplied by a coefficient b and used as the remapping relation map, and the color left-eye image is remapped to obtain color right-eye images with different disparity values, so that 3D images of different degrees of stereoscopic effect are synthesized. Fig. 3 shows the principle of forward-mapping a new right-eye image from the color left-eye image and the left disparity map. As shown in Fig. 3, during left-to-right mapping, background pixels visible in the left view but occluded in the right view are mapped into the foreground region of the right image and are later covered by the foreground pixel mapping. In the generated right image, regions visible on the right but occluded on the left have no reference points in the left image to map from, so holes remain there, and hole filling is needed to complete the rendering of the color right-eye image. Since these hole regions resemble the background, the holes are filled with the color information of the pixels adjacent to them in the background region to their right. Fig. 4 shows the principle of backward-mapping a right-eye image from the color left-eye image and the right disparity map.
Because every pixel of the right-eye image to be rendered can be mapped backward to a pixel of the left image according to its disparity value, the rendered image has no hole problem. However, as shown in Fig. 4, pixels of right-view regions that are occluded on the left would be mapped backward into the foreground region of the left image if they used the disparity of the background region; the disparity values of those regions in the right disparity map must therefore be modified in advance to agree with the foreground disparity — that is, the disparity map is dilated rightward — so that the left-occluded regions map backward to the background region. The left and right disparity maps, multiplied by the coefficient b, serve as the remapped pixel-coordinate conversion maps, and the color right-eye image is rendered from the color left-eye image by forward and backward mapping respectively. Experiments show that as b increases, the parallax increases, the hole areas of the forward-mapped rendering grow, and the foreground of the backward-mapped rendering deforms more. When the coefficient b is small, i.e. less than 1.5, backward mapping without hole filling can be used to render the color right-eye image; when b is large, i.e. greater than 2.5, forward mapping followed by hole filling can be used, since forward mapping does not distort the foreground region. The color left-eye image and the rendered color right-eye image are composited into one image, imported into the ZTE Tianji 7 MAX, pixel-interleaved by the phone system's interleaving algorithm, and displayed on the naked-eye 3D screen.
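The forward/backward mapping contrast above can be sketched on a single grayscale scanline. This is a minimal sketch: the sign convention (right-view column = left column + disparity for forward mapping, − disparity for backward) and the scan order that lets later writes win are assumptions of this illustration, not the patent's exact formulation, and hole filling is left out.

```python
import numpy as np

def forward_map(left, disp):
    """Forward mapping: push each left pixel to column x + disp in the
    right view.  Positions no pixel lands on stay -1 -- these are the
    holes that later need filling from the background."""
    H, W = left.shape
    right = np.full((H, W), -1.0)
    for y in range(H):
        for x in range(W):   # later writes overwrite earlier ones;
            xr = x + int(disp[y, x])  # a real renderer orders writes so
            if 0 <= xr < W:           # foreground pixels win
                right[y, xr] = left[y, x]
    return right

def backward_map(left, disp_r):
    """Backward mapping: pull each right-view pixel from column x - disp
    in the left image.  Every pixel gets a value, so no holes appear."""
    H, W = left.shape
    right = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            xl = x - int(disp_r[y, x])
            right[y, x] = left[y, np.clip(xl, 0, W - 1)]
    return right
```

Running both on a scanline with uniform disparity shows the trade-off directly: forward mapping leaves a −1 hole at the disoccluded edge, while backward mapping produces a complete (but potentially foreground-distorted) row.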
The above examples are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be regarded as an equivalent replacement and shall fall within the protection scope of the present invention.

Claims (8)

1. A binocular vision naked eye 3D image generation method, characterized by comprising the following steps:
acquiring an original binocular image pair, one color and one grayscale, through an image acquisition device;
completing calibration of the image acquisition device for the acquired original binocular image pair and obtaining the relative positions of its cameras; the calibration of the acquisition device is specifically as follows: performing monocular calibration on the main camera and the secondary camera of the image acquisition device respectively, and after the respective camera parameters are obtained, performing binocular calibration to determine the relative position relationship between the main camera and the secondary camera; the image acquisition device is calibrated with a binocular camera calibration tool, a checkerboard template is used as the calibration target, the Z coordinates of the points on the checkerboard template in the world coordinate system are defined to be 0 with the first inner corner point at the upper left corner as the origin, the checkerboard image is made to occupy the main part of the picture seen by the lens of the image acquisition device, checkerboard image pairs are acquired from different angles and imported into the binocular calibration tool, the actual size of the checkerboard squares is input, and corner points are searched; when the corner points extracted from the left and right checkerboard images can be put in one-to-one correspondence, the image pair is used to calculate the camera parameters, a distortion model with 2 radial distortion coefficients and 2 tangential distortion coefficients is selected, and the coordinates of the main camera and the secondary camera are obtained, thereby obtaining the relative position relationship of the main and secondary cameras of the acquisition device;
performing binocular rectification on the original binocular image using the relative positions of the cameras of the image acquisition device to obtain a corrected binocular image, wherein the corrected binocular image comprises a black-and-white right-eye image and a color left-eye image;
performing binocular matching on the corrected binocular image by using a binocular matching algorithm to generate a parallax image;
and (3) performing image drawing on the black-white right-eye image by utilizing the parallax image to generate a color right-eye image, synthesizing the color left-eye image and the color right-eye image into one image, and performing pixel interleaving to generate a final binocular image.
2. The binocular vision naked eye 3D image generation method according to claim 1, wherein the image acquisition by the image acquisition device is specifically: shooting with a dual-camera naked-eye 3D image acquisition device; the main camera is opened first, its parameters are set, a preview is shown, and a shot is taken; whether the current camera is the main camera is then detected, and if so, the secondary camera is opened, its parameters are set, a preview is shown, and a shot is taken; if not, the main camera is opened, its parameters are set, a preview is shown, and a shot is taken; the original binocular image pair is thus obtained, i.e. the main camera acquires a color image and the secondary camera acquires a black-and-white image.
3. The binocular vision naked eye 3D image generating method according to claim 1, wherein the binocular correction specifically comprises: performing binocular rectification on the acquired original binocular image with the Bouguet epipolar rectification algorithm; using the camera parameters obtained from camera calibration and the relative position relationship of the image acquisition device, the stereoRectify function computes the corrected rotation matrix R_l′ of the main camera, the corrected rotation matrix R_r′ of the secondary camera, the projection matrix P_l of the main camera, the projection matrix P_r of the secondary camera, and the reprojection matrix Q; the calibration mapping parameters of the left-eye and right-eye images are obtained with the initUndistortRectifyMap function; and the corrected binocular image is obtained by mapping with the calibration mapping parameters of the left and right images and the remap function.
4. A binocular vision naked eye 3D image generating method according to claim 3, wherein the corrected rotation matrix R_l′ of the main camera and the corrected rotation matrix R_r′ of the secondary camera are obtained through the stereoRectify function specifically as follows: the rotation matrix R between the two cameras is split into a half-rotation r_l for the main camera and a half-rotation r_r for the secondary camera, so that each camera rotates by half; a transformation matrix R_c = [e_1 e_2 e_3]^T is constructed from the translation vector T between the cameras, where e_1 points in the epipolar direction, the same direction as the translation vector T, e_2 is orthogonal to the main optical axis direction and to e_1, and e_3 is orthogonal to e_1 and e_2; the corrected rotation matrices R_l′ of the main camera and R_r′ of the secondary camera are then computed as:
e_1 = T/‖T‖,  e_2 = [−T_y, T_x, 0]^T / √(T_x² + T_y²),  e_3 = e_1 × e_2,  R_l′ = R_c r_l,  R_r′ = R_c r_r
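The construction of R_c can be checked numerically. A small NumPy sketch (valid when the baseline is not parallel to the optical axis, since e_2 divides by √(T_x² + T_y²)):

```python
import numpy as np

def rectifying_rotation(T):
    """Build R_c from the translation vector T between the cameras:
    e1 along the baseline (epipolar direction), e2 orthogonal to e1 and
    to the optical axis, e3 completing the right-handed frame. Rows of
    the returned matrix are e1, e2, e3."""
    T = np.asarray(T, dtype=float)
    e1 = T / np.linalg.norm(T)
    e2 = np.array([-T[1], T[0], 0.0]) / np.hypot(T[0], T[1])
    e3 = np.cross(e1, e2)
    return np.vstack([e1, e2, e3])
```

For a purely horizontal baseline the result is orthonormal and leaves the depth axis unchanged, which is the expected behavior of a rectifying rotation.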
5. A binocular vision naked eye 3D image generating method according to claim 3, wherein the binocular matching specifically comprises: correcting the mismatching points that exist in occluded areas and at parallax jump boundaries; for the mismatching points in occluded areas, the mismatching points are found by a left-right consistency check on the corrected binocular image: a point is a correct match if the disparity difference between the corresponding points of the left and right disparity maps is 0, and is otherwise regarded as a mismatching point; each detected mismatching point searches in the two horizontal directions for the first correct matching point on each side, and the smaller of the two disparity values replaces the disparity of the occluded area; for the mismatching points at parallax jump boundaries, for each pixel point at the boundary, the two pixel points to its left and right are selected, with corresponding disparity values D(q_l) and D(q_r) respectively, and the disparity value of the pixel point is:
D(p) = argmin_{d ∈ {D(q_l), D(q_r)}} C(p, d)
where q_l and q_r are the first correct matching points found horizontally to the left and to the right of the detected mismatching point, D(q_l) and D(q_r) are their corresponding disparity values, D(p) is the disparity value assigned to the pixel point p, and C(p, d) is the input matching cost map to be filtered;
the binocular image acquired by the combined color and grayscale dual cameras is converted into a pair of uniform grayscale images after camera calibration and binocular rectification, and the binocular grayscale images are matched with a stereo binocular matching algorithm to generate the parallax map.
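The occlusion-correction step of claim 5 can be sketched as follows: a simple NumPy illustration of the left-right consistency check and smaller-disparity filling (integer disparities assumed; the cost-based boundary refinement is omitted).

```python
import numpy as np

def lr_check_and_fill(disp_l, disp_r):
    """Left-right consistency check: pixel p = (x, y) of the left map is
    a correct match only if the right map at x - d carries the same
    disparity d. Each mismatching point then takes the smaller of the
    first correct disparities found to its left and right, since occluded
    regions resemble the (smaller-disparity) background."""
    h, w = disp_l.shape
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = disp_l[y, x]
            xr = x - d
            valid[y, x] = 0 <= xr < w and disp_r[y, xr] == d
    out = disp_l.copy()
    for y in range(h):
        for x in range(w):
            if not valid[y, x]:
                cands = []
                for step in (-1, 1):  # first correct match left, then right
                    xx = x + step
                    while 0 <= xx < w and not valid[y, xx]:
                        xx += step
                    if 0 <= xx < w:
                        cands.append(out[y, xx])
                if cands:
                    out[y, x] = min(cands)
    return out
```

Border pixels whose match would fall outside the image fail the check by construction and are filled like any other occluded point.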
6. The binocular vision naked eye 3D image generation method according to claim 1, wherein the image drawing specifically comprises: drawing the color right-eye image using the depth information; the parallax map obtained by binocular matching is multiplied by a coefficient b and used as the remapping relation map, and the color left-eye image is remapped to obtain color right-eye images with different parallax values, so as to synthesize 3D images with different degrees of stereoscopic effect; the left and right parallax maps multiplied by the coefficient b are used as the pixel-coordinate conversion maps for remapping, and the color right-eye image is drawn from the color left-eye image by forward mapping and backward mapping respectively; and the color left-eye image and the drawn color right-eye image are synthesized into a 3D image and pixel-interleaved by an interleaving algorithm for display on a naked-eye 3D display screen.
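The pixel interleaving at the end of claim 6 depends on the target screen's subpixel layout, which the claim does not specify. A column-alternating sketch for a simple two-view parallax-barrier layout (an assumption for illustration, not the handset's actual algorithm):

```python
import numpy as np

def interleave_columns(left, right):
    """Column-wise pixel interleaving: even columns come from the left
    view, odd columns from the right view, so the barrier or lenticular
    optics steer alternate columns to alternate eyes."""
    assert left.shape == right.shape
    out = left.copy()
    out[:, 1::2] = right[:, 1::2]
    return out
```

Real autostereoscopic panels often interleave at subpixel (R/G/B) granularity and at a slanted angle; the column version above only conveys the principle.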
7. The binocular vision naked eye 3D image generation method according to claim 6, wherein the color right-eye images of different parallax values include a new right-eye image drawn by forward mapping and a right-eye image drawn by backward mapping.
8. The binocular vision naked eye 3D image generation method according to claim 6, wherein, as to the coefficient b: when the coefficient b is less than 1.5, the color right-eye image is drawn by backward mapping without hole filling; when the coefficient b is greater than 2.5, the right-eye image is drawn by forward mapping, after which hole filling is performed.
CN201911202342.0A 2019-11-29 2019-11-29 Binocular vision naked eye 3D image generation method Active CN111047709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911202342.0A CN111047709B (en) 2019-11-29 2019-11-29 Binocular vision naked eye 3D image generation method

Publications (2)

Publication Number Publication Date
CN111047709A CN111047709A (en) 2020-04-21
CN111047709B true CN111047709B (en) 2023-05-05

Family

ID=70234146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911202342.0A Active CN111047709B (en) 2019-11-29 2019-11-29 Binocular vision naked eye 3D image generation method

Country Status (1)

Country Link
CN (1) CN111047709B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111866492A (en) * 2020-06-09 2020-10-30 青岛小鸟看看科技有限公司 Image processing method, device and equipment based on head-mounted display equipment
CN112024167A (en) * 2020-08-07 2020-12-04 湖南中环机械涂装有限公司 Automobile spraying process method and intelligent control system thereof
CN112085777A (en) * 2020-09-22 2020-12-15 上海视天科技有限公司 Six-degree-of-freedom VR glasses
CN116710807A (en) * 2021-03-31 2023-09-05 华为技术有限公司 Range finding camera based on time of flight (TOF) and control method
CN115249214A (en) * 2021-04-28 2022-10-28 华为技术有限公司 Display system and method for binocular distortion correction and vehicle-mounted system
CN113112553B (en) * 2021-05-26 2022-07-29 北京三快在线科技有限公司 Parameter calibration method and device for binocular camera, electronic equipment and storage medium
CN115205451A (en) * 2022-06-23 2022-10-18 未来科技(襄阳)有限公司 Method and device for generating 3D image and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103868460A (en) * 2014-03-13 2014-06-18 桂林电子科技大学 Parallax optimization algorithm-based binocular stereo vision automatic measurement method
WO2018086348A1 (en) * 2016-11-09 2018-05-17 人加智能机器人技术(北京)有限公司 Binocular stereo vision system and depth measurement method
CN110322572A (en) * 2019-06-11 2019-10-11 长江勘测规划设计研究有限责任公司 A kind of underwater culvert tunnel inner wall three dimensional signal space method based on binocular vision

Similar Documents

Publication Publication Date Title
CN111047709B (en) Binocular vision naked eye 3D image generation method
US11632537B2 (en) Method and apparatus for obtaining binocular panoramic image, and storage medium
US8711204B2 (en) Stereoscopic editing for video production, post-production and display adaptation
CN101902657B (en) Method for generating virtual multi-viewpoint images based on depth image layering
US20230291884A1 (en) Methods for controlling scene, camera and viewing parameters for altering perception of 3d imagery
CN105262958B (en) A kind of the panorama feature splicing system and its method of virtual view
CN111325693B (en) Large-scale panoramic viewpoint synthesis method based on single viewpoint RGB-D image
CN108648264B (en) Underwater scene reconstruction method based on motion recovery and storage medium
CN102665086A (en) Method for obtaining parallax by using region-based local stereo matching
CN106791774A (en) Virtual visual point image generating method based on depth map
CN104751508B (en) The full-automatic of new view is quickly generated and complementing method in the making of 3D three-dimensional films
CN104506872B (en) A kind of method and device of converting plane video into stereoscopic video
CN106408513A (en) Super-resolution reconstruction method of depth map
CN114401391B (en) Virtual viewpoint generation method and device
CN104869386A (en) Virtual viewpoint synthesizing method based on layered processing
CN106028020B (en) A kind of virtual perspective image cavity complementing method based on multi-direction prediction
Bleyer et al. Temporally consistent disparity maps from uncalibrated stereo videos
KR20170025214A (en) Method for Multi-view Depth Map Generation
CN113450274B (en) Self-adaptive viewpoint fusion method and system based on deep learning
GB2585197A (en) Method and system for obtaining depth data
CN111899293B (en) Virtual and real shielding processing method in AR application
CN107610070B (en) Free stereo matching method based on three-camera collection
CN110149508A (en) A kind of array of figure generation and complementing method based on one-dimensional integrated imaging system
CN112200852B (en) Stereo matching method and system for space-time hybrid modulation
Seitner et al. Trifocal system for high-quality inter-camera mapping and virtual view synthesis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant