Abstract
In this paper, a high-precision integral imaging (II) pickup system for real scenes is proposed. A dual-projection optical pickup method is utilized to obtain the elemental image array for the II display. The proposed method is robust to position deviations of the projectors and the camera, and the calibration of the camera is simplified. Furthermore, the pickup is not limited by complex optical and mechanical structures. Experimental results show that the proposed system can generate continuous and tunable parallaxes. With the proposed II pickup and display system, high-quality 3D images of real scenes can be reconstructed efficiently.
1 Introduction
Integral imaging (II) is an attractive three-dimensional (3D) technology that can reconstruct autostereoscopic 3D images without glasses and provide both horizontal and vertical parallaxes with continuous views [1–3]. Basically, a conventional II system consists of a pickup section and a display section. In the pickup section, however, there are still problems such as limitations on 3D resolution, parallax range, and scene size, which delay the practical application of II. In the past decades, many researchers have focused on solving these problems, and many technologies have been proposed, including optical pickup methods, computer graphics techniques, and depth-camera-based methods.
The conventional optical pickup method using a micro-lens array (MLA) is limited by the scene size, unnecessary beams, and aberrations [4–7]. It is difficult to capture the 3D information of a real, large-sized 3D scene in practice, and the quality of the reconstructed 3D image is reduced by the limitations of the MLA manufacturing technique. Some researchers replace the MLA with a camera array in the pickup part to collect full-color, high-resolution 3D information of a large-sized real 3D scene [8–11]. The elemental image array (EIA) is then generated from the 3D information by pixel mapping algorithms [11, 12]. Although these methods can be applied to large-sized real scenes, they require complex optical and mechanical structures. Furthermore, the calibration of the camera array is difficult when it contains a large number of cameras [9, 11].
In recent years, with the development of computer graphics technology, computer-generated integral imaging (CGII) has been proposed [12–16]. In CGII, however, the limitations of computer graphics itself make capturing the 3D information of real scenes difficult. To simplify the II pickup, researchers at Chungbuk National University made a great contribution by collecting the EIA of a real scene with a depth camera [17]. This method simplifies the pickup process, but it is limited by the accuracy and resolution of the depth camera, and the occlusions and holes in the depth map seriously degrade the quality of the EIA. Some studies combine the optical and computer-generated methods; for example, a dual-camera enabled II pickup system has been proposed [18]. However, the generated parallaxes are limited by the two-view stereoscopic camera and the stereo matching algorithms [19].
In this paper, we propose a system to achieve high-precision II pickup for real scenes. The dual-projection optical pickup (DPOP) method is used to capture the 3D information without complex calibration and obtains a more complete reconstructed 3D shape. Sub-images are then generated from the color texture and depth data of the real scene, and the EIA for the II display is generated by interleaving the obtained sub-images. Experimental results verify the usefulness of the proposed system.
2 Principle of the Proposed II Pickup System
We achieve a high-precision II pickup system for real scenes based on the DPOP method. In the proposed system, a more complete 3D shape of the real scene can be obtained. Continuous and tunable parallaxes, in the form of sub-images, are extracted from the color texture and depth data. By interleaving the sub-images, a high-precision EIA is obtained for the II display.
The architecture of the proposed system is composed of four parts: (1) the input part, including the parameters of the DPOP method and the II display; (2) the reconstruction part, including the reconstruction of the more complete 3D shape and the capture of the corresponding color texture; (3) the EIA generation part, including the generation of the sub-images and the pixel mapping for the EIA; and (4) the 3D display part, which shows the EIA through the MLA to the viewers.
2.1 Comparison of the Conventional and Proposed II Pickup System
In the conventional II pickup system, as shown in Fig. 1(a) and (b), the EIA is generated by the MLA or by a camera array. In Fig. 1(a), the real scene is captured as elemental images through each micro-lens of the MLA. The size of the scene and the accuracy of the EIA are limited by the parameters of the MLA, and the cross-talk between neighboring micro-lenses also reduces the quality of the reconstructed 3D images [20]. In Fig. 1(b), a camera array is arranged to pick up the real scene, but the camera array needs accurate calibration, operation, and synchronization. Besides, latency and bandwidth are both limitations.
In our proposed system, as shown in Fig. 1(c), we utilize two digital light processing projectors (DLPs) to project structured light [21–24]. The use of dual projection, DLP1 and DLP2, avoids the error accumulation caused by the occlusion of a single DLP [25]. In the proposed DPOP method, DLP1 and DLP2 project grating patterns onto the real scene, and the deformed patterns are captured by a charge coupled device (CCD). The 3D shape of the real scene is modulated into the deformed patterns, and it can be extracted with phase unwrapping algorithms and mapping algorithms [26–29]. However, the phase unwrapping algorithms are not suitable for the blind areas, which cause error accumulation in the reconstructed 3D shape, so the reconstruction based on a single DLP may be incomplete [25]. In the proposed DPOP method, the two DLPs project grating patterns from different directions, and the 3D shape is reconstructed with each DLP separately. Since each of these 3D shapes is imperfect, we introduce a fusion and stitching algorithm to obtain a more complete 3D shape.
2.2 Reconstruction of 3D Shape for Real Scene by DPOP Method
In this paper, the DPOP method is proposed to obtain the complete 3D shape of the real scene. Two DLPs are utilized to project grating patterns in order to avoid the blind areas of a single DLP, and the reconstructed 3D shapes are then fused into a complete shape.
As shown in Fig. 2, DLP1 and DLP2 are arranged in front of the real scene and project N grating patterns. The N grating patterns follow a sinusoidal rule, with an equal phase shift of 2π/N between adjacent patterns. The CCD captures the j-th deformed pattern from DLP i (i = 1, 2), and the intensity of the captured deformed pattern is denoted as I_i(x, y, j):
where j = 1, 2, …, N; x and y are the pixel coordinates in the captured deformed pattern; R_i(x, y) is the surface reflectance of the real scene; A_i(x, y) represents the background light intensity; B_i(x, y) is the fringe contrast; φ_i(x, y) indicates the deformed phase modulated by the real scene; and σ_j is the phase shift of the j-th deformed pattern.
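For reference, the standard N-step phase-shifting fringe model consistent with these definitions is

I_i(x, y, j) = R_i(x, y) [A_i(x, y) + B_i(x, y) cos(φ_i(x, y) + σ_j)],

with, for example, σ_j = 2π(j − 1)/N as one common convention for an equal 2π/N step.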
As in structured light illumination with a single DLP [22, 29], the truncated phase φ′_i(x, y) of the deformed phase φ_i(x, y) can be deduced as:
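A standard form of this N-step estimate, consistent with the fringe model above (the sign depends on the chosen convention), is

φ′_i(x, y) = −arctan[ Σ_{j=1}^{N} I_i(x, y, j) sin σ_j / Σ_{j=1}^{N} I_i(x, y, j) cos σ_j ],

evaluated with the four-quadrant arctangent.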
According to the inverse trigonometric functions, φ′_i(x, y) takes values in [−π, π). To obtain a continuous phase distribution, the truncated phase φ′_i(x, y) needs to be unwrapped by a phase unwrapping algorithm [22, 28], and the unwrapped phase is denoted as Ψ_i(x, y). Then the phase change ΔΨ_i(x, y) between the real scene and the reference plane can be calculated, and according to the phase-to-height mapping algorithm, the height Δh_i(x, y) of the captured real scene can be calculated as follows:
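One widely used three-parameter phase-to-height relation, matching the mapping parameters named below and assumed here for illustration, is

1/Δh_i(x, y) = a_i(x, y) + b_i(x, y)/ΔΨ_i(x, y) + c_i(x, y)/ΔΨ_i²(x, y),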
where a_i(x, y), b_i(x, y), and c_i(x, y) are the mapping parameters, which can be acquired by plane calibrations. After processing the deformed patterns, we obtain the height and contour information of the real scene. The height Δh_i(x, y) obtained by the single DLP i may not be complete because of the blind areas. However, the obtained height Δh_i(x, y) is determined only by the real scene, not by the measurement system; in other words, Δh_i(x, y) is independent of the parameters of the DPOP method. Therefore, the heights Δh_1(x, y) and Δh_2(x, y) can be fused and stitched together to obtain a more complete 3D shape. The fused height ΔH(x, y) can be obtained as:
where Ω_i represents the pixel region in which the reconstructed height Δh_i(x, y) has no accumulated errors from the phase unwrapping algorithm, and Ω represents the whole pixel region.
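A minimal sketch of this per-DLP reconstruction and fusion is given below. It assumes the standard phase-shifting and three-parameter mapping forms shown above, averages the two heights where both are valid, and leaves the unwrapping step (φ′_i → Ψ_i) to any standard routine [22, 28]; the function names, the validity masks, and the averaging rule in the overlap are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def wrapped_phase(patterns):
    """Wrapped (truncated) phase from N equally phase-shifted fringe images.

    patterns: array of shape (N, H, W); pattern j (0-indexed) is assumed to
    carry a phase shift of 2*pi*j/N.  Returns values in (-pi, pi].
    """
    n = patterns.shape[0]
    shifts = 2.0 * np.pi * np.arange(n) / n
    num = np.tensordot(np.sin(shifts), patterns, axes=(0, 0))
    den = np.tensordot(np.cos(shifts), patterns, axes=(0, 0))
    return np.arctan2(-num, den)

def phase_to_height(delta_psi, a, b, c):
    """Assumed three-parameter phase-to-height mapping:
    1/h = a + b/delta_psi + c/delta_psi**2, with a, b, c from plane calibration."""
    return 1.0 / (a + b / delta_psi + c / delta_psi ** 2)

def fuse_heights(h1, valid1, h2, valid2):
    """Fuse the two per-DLP height maps.

    valid_i marks the region Omega_i where DLP i produced no accumulated
    unwrapping errors (e.g. obtained from fringe-modulation thresholds).
    """
    fused = np.zeros_like(h1)
    both = valid1 & valid2
    only1 = valid1 & ~valid2
    only2 = valid2 & ~valid1
    fused[both] = 0.5 * (h1[both] + h2[both])  # assumption: average in the overlap
    fused[only1] = h1[only1]
    fused[only2] = h2[only2]
    return fused
```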
2.3 Generation Method of Sub-images and EIA for II Display
To obtain a high-precision EIA, we first generate the sub-images. A sub-image, which is the collection of the pixels at the same position in every elemental image, has an orthographic projection geometry, and in the II display the sub-images represent a series of directional images. As shown in Fig. 3, the real scene is imaged onto the EIA plane by the MLA, and the parallel rays with the same directional angle θ can be extracted to form an orthographic sub-image [16, 30]. Figure 3(a) and (b) show the generation geometries of the sub-images and EIAs for different central depth planes. The pixel information of a sub-image is extracted from the color texture, and its pixel coordinates are decided by the central depth plane and the depth data. The depth data ΔD(x, y) can be transformed from the fused height ΔH(x, y):
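One plausible form of this conversion, assuming an isotropic pixel scale (R_w/W ≈ R_h/H), is

ΔD(x, y) = ΔH(x, y) · R_w / W,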
where W and H are the real width and height of the real scene, and R_w × R_h is the resolution of the captured deformed pattern. Eq. (6) converts the height ΔH(x, y) from the true height to pixel coordinates, giving the depth data ΔD(x, y). In the sub-image, as shown in Fig. 3, the pixel information at K is mapped to the pixel coordinate G, and the pixel shift between K and G is denoted as Δq. According to the geometry shown in Fig. 3, the sub-image I_θ(x, y) for the projecting angle θ can be deduced as:
where T(x, y) is the pixel information of the color texture, and Δq_x and Δq_y are the components of the pixel shift Δq along the x and y axes, respectively. The pixel shift Δq depends on the depth data and the central depth plane, and can be calculated as:
where d_c, expressed in pixel coordinates, is the distance between the zero plane of the depth data (z = 0) and the central depth plane.
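The sketch below illustrates this shift-and-resample step under simple assumptions: the color texture and depth map are co-registered arrays of the same resolution, the pixel shift is taken as Δq = (ΔD(x, y) − d_c) tan θ along each axis, and nearest-neighbor scattering without occlusion handling is used. The sign convention and function names are illustrative, not the authors' exact formulation.

```python
import numpy as np

def sub_image(texture, depth, d_c, theta_x, theta_y):
    """Generate one orthographic sub-image for projecting angles (theta_x, theta_y).

    texture: (H, W, 3) color texture T(x, y)
    depth:   (H, W) depth data Delta-D(x, y) in pixel units
    d_c:     distance (in pixels) from the depth zero plane to the central depth plane
    Assumed pixel shift: dq = (depth - d_c) * tan(theta) along each axis.
    """
    h, w = depth.shape
    ys, xs = np.indices((h, w))
    dq_x = np.round((depth - d_c) * np.tan(theta_x)).astype(int)
    dq_y = np.round((depth - d_c) * np.tan(theta_y)).astype(int)
    xt = np.clip(xs + dq_x, 0, w - 1)  # pixel K at (xs, ys) lands on G = K + dq
    yt = np.clip(ys + dq_y, 0, h - 1)
    out = np.zeros_like(texture)
    out[yt, xt] = texture[ys, xs]      # nearest-neighbor scatter, no occlusion handling
    return out
```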
In the proposed system, the projecting angle θ can be deduced from the parameters of the II display. As shown in Fig. 3, the gap between the MLA and the EIA is g, and the interval between a pixel of the elemental image and the center of the elemental image is Δr. The projecting angle θ can be decided by:
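Geometrically, this corresponds to θ = arctan(Δr / g), assuming the direction is defined by the ray through the lens center and the pixel at offset Δr.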
For different intervals Δr, the projecting angle θ is different, so the parallaxes are continuous and tunable.
With the sub-images obtained for all projecting angles, the EIA can be generated efficiently by interleaving the sub-images based on the viewpoint vector rendering method [30].
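As a minimal illustration of this interleaving (not the full viewpoint vector rendering pipeline of [30]), the sketch below assumes square elemental images of P × P pixels and one M × N sub-image per local pixel position (u, v); the (u, v) pixel of elemental image (m, n) is taken from sub-image (u, v) at position (m, n).

```python
import numpy as np

def interleave_subimages(sub_images):
    """Interleave orthographic sub-images into an elemental image array (EIA).

    sub_images: array of shape (P, P, M, N, C) -- sub_images[u, v] is the
    M x N sub-image associated with local pixel (u, v) of each elemental image.
    Returns an EIA of shape (M*P, N*P, C).
    """
    p, _, m, n, c = sub_images.shape
    eia = np.zeros((m * p, n * p, c), dtype=sub_images.dtype)
    for u in range(p):
        for v in range(p):
            # Pixel (u, v) of every elemental image comes from sub-image (u, v).
            eia[u::p, v::p] = sub_images[u, v]
    return eia
```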
3 Experiments and Results
In our experiments, two projectors (CB-X24) are used as DLP1 and DLP2 to project the N grating patterns, with N = 4, so the phase shift between adjacent patterns is 2π/4 = π/2. The CCD (GM501-H) captures the deformed patterns at a resolution of 640 × 480 pixels. The generated EIA is displayed on the II pad [15]. The experimental setup is shown in Fig. 4.
The proposed II pickup system is configured with the specifications in Table 1. The distance between the CCD and the DLPs is 0.331 m, and two central depth planes, at the d_c = 0 pixel plane and the d_c = 130 pixel plane, are used.
In our experiments, a “man head” is used as the real scene. We reconstruct the 3D shapes of the head from the deformed patterns (Fig. 5(a) and (c)) with each DLP, respectively. As shown in Fig. 5(d) and (e), the 3D shape reconstructed with a single DLP is not complete. Fig. 5(e) shows the fused 3D shape in our proposed system, and from the profile shown in Fig. 5(d) we can see that the fused 3D shape is complete, with no error accumulation.
We generate the sub-images with the proposed method from the depth data and color texture of the head, as shown in Fig. 6(a)–(c). The projecting angle θ is continuous and tunable. The EIAs generated with the two different central depth planes are shown in Fig. 6(d) and (e).
When the viewer moves in front of the II display, the reconstructed 3D images are captured from different positions, as shown in Fig. 7.
4 Conclusion
A dual-projection based high-precision II pickup system for real scenes is proposed in this paper. The proposed system takes advantage of the high accuracy of structured light illumination and simplifies the optical and mechanical structures for capturing the real scene. With the high-precision depth data and color texture, continuous and tunable parallaxes are generated in the experiments. With the proposed II pickup system, a high-quality EIA can be generated efficiently.
References
Lippmann, G.: La photographie integrale. C. R. Acad. Sci. 146, 446–451 (1908)
Hong, J., Kim, Y., Choi, H.J., Hahn, J., Park, J.H., Kim, H., Min, S.W., Chen, N., Lee, B.: Three-dimensional display technologies of recent interest: principles, status, and issues. Appl. Opt. 50(34), H87–H115 (2011)
Xiao, X., Javidi, B., Martinez-Corral, M., Stern, A.: Advances in three-dimensional integral imaging: sensing, display, and applications [Invited]. Appl. Opt. 52(4), 546–560 (2013)
Okano, F., Hoshino, H., Arai, J., Yuyama, I.: Real-time pickup method for a three-dimensional image based on Integral Photography. Appl. Opt. 36(7), 1598–1603 (1997)
Jang, J.S., Javidi, B.: Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics. Opt. Lett. 27(5), 324–326 (2002)
Yoo, H.: Axially moving a lenslet array for high-resolution 3D images in computational integral imaging. Opt. Express 21(7), 8873–8878 (2013)
Arai, J., Okui, M., Yamashita, T., Okano, F.: Integral three-dimensional television using a 2000-scanning-line video system. Appl. Opt. 45(8), 1704–1712 (2006)
Xu, Y., Wang, X.R., Sun, Y., Zhang, J.Q.: Homogeneous light field model for interactive control of viewing parameters of integral imaging displays. Opt. Express 20(13), 14137–14151 (2012)
Sang, X.Z., Fan, F.C., Jiang, C.C., Choi, S., Dou, W.H., Yu, C., Xu, D.: Demonstration of a large-size realtime full-color three-dimensional display. Opt. Lett. 34(24), 3803–3805 (2009)
Moon, I., Javidi, B.: Three-dimensional recognition of photon-starved events using computational integral imaging and statistical sampling. Opt. Lett. 34(6), 731–733 (2009)
Navarro, H., Dorado, A., Saavedra, G., Llavador, A., Martínez-Corral, M., Javidi, B.: Is it worth using an array of cameras to capture the spatio-angular information of a 3D scene or is it enough with just two? In: Proceedings of SPIE vol. 8384, pp. 838406–838406-7 (2012)
Liao, H., Iwahara, M., Hata, N., Dohi, T.: High-quality integral videography using a multiprojector. Opt. Express 12(6), 1067–1076 (2004)
Igarashi, Y., Murata, H., Ueda, M.: 3D display system using a computer generated integral photography. Jpn. J. Appl. Phys. 17(9), 1683–1684 (1978)
Jang, Y.H., Park, C., Jung, J.S., Park, J.H., Kim, N., Ha, J.S., Yoo, K.H.: Integral imaging pickup method of bio-medical data using GPU and Octree. J. Korea Contents Assoc. 10(6), 1–9 (2010)
Ji, C.C., Luo, C.G., Deng, H., Li, D.H., Wang, Q.H.: Tilted elemental image array generation method for moiré-reduced computer generated integral imaging display. Opt. Express 21(17), 19816–19824 (2013)
Kwon, K.C., Park, C., Erdenebat, M.U., Jeong, J.S., Choi, J.H., Kim, N., Park, J.H., Lim, Y.T., Yoo, K.H.: High speed image space parallel processing for computer-generated integral imaging system. Opt. Express 20(2), 732–740 (2012)
Li, G., Kwon, K.C., Shin, G.H., Jeong, J.S., Yoo, K.H., Kim, N.: Simplified integral imaging pickup method for real objects using a depth camera. J. Opt. Soc. Korea 16(4), 381–385 (2012)
Jiao, X.X., Zhao, X., Yang, Y., Fang, Z.L., Yuan, X.C.: Dual-camera enabled real-time three-dimensional integral imaging pick-up and display. Opt. Express 20(25), 27304–27311 (2012)
Cooperation Stereo Vision. http://www.cs.cmu.edu/clz/stereo.html
Kavehvash, Z., Mehrany, K., Bagheri, S.: Optimization of the lens-array structure for performance improvement of integral imaging. Opt. Lett. 36(20), 3993–3995 (2011)
Srinivasan, V., Liu, H.C., Halioua, M.: Automated phase-measuring profilometry of 3-D diffuse objects. Appl. Opt. 23(18), 3105–3108 (1984)
Kim, E.H., Hahn, J., Kim, H., Lee, B.: Profilometry without phase unwrapping using multi-frequency and four-step phase-shift sinusoidal fringe projection. Opt. Express 17(10), 7818–7830 (2009)
Kim, J., Jung, J.H., Jang, C., Lee, B.: Real-time capturing and 3D visualization method based on integral imaging. Opt. Express 21(16), 18742–18753 (2013)
Schaffer, M., Grosse, M., Kowarschik, R.: High-speed pattern projection for three-dimensional shape measurement using laser speckles. Appl. Opt. 49(18), 3622–3629 (2010)
Su, L., Su, X., Li, W., Xiang, L.: Application of modulation measurement profilometry to objects with surface holes. Appl. Opt. 38(7), 1153–1158 (1999)
Ou, P., Li, B., Wang, Y., Zhang, S.: Flexible real-time natural 2D color and 3D shape measurement. Opt. Express 21(14), 16736–16741 (2013)
Zhang, S., Van Der Weide, D., Oliver, J.: Superfast phase-shifting method for 3-D shape measurement. Opt. Express 18(9), 9684–9689 (2010)
Liu, K., Wang, Y., Lau, D.L., Hao, Q., Hassebrook, L.G.: Dual-frequency pattern scheme for high-speed 3-D shape measurement. Opt. Express 18(5), 5229–5244 (2010)
Xu, Y., Jia, S., Bao, Q., Chen, H., Yang, J.: Recovery of absolute height from wrapped phase maps for fringe projection profilometry. Opt. Express 22(14), 16819–16828 (2014)
Park, K.S., Min, S.W., Cho, Y.: Viewpoint vector rendering for efficient elemental image generation. IEICE Trans. Inf. Syst. E 90-D, 233–241 (2007)
Kang, H.H., Lee, J.H., Kim, E.S.: Enhanced compression rate of integral images by using motion-compensated residual images in three-dimensional integral-imaging. Opt. Express 20(5), 5440–5459 (2012)
Lee, J.J., Shin, D.H., Lee, B.G.: Simple correction method of distorted elemental images using surface markers on lenslet array for computational integral imaging reconstruction. Opt. Express 17(20), 18026–18037 (2009)
Acknowledgment
The work is supported by the NSFC under Grant Nos. 61225022 and 61320106015, the “973” Program under Grant No. 2013CB328802, and the “863” Program under Grant No. 2015AA015902.