A Shape-Adjusted Tridimensional Reconstruction of Cultural Heritage Artifacts Using a Miniature Quadrotor
"> Figure 1
<p>(<b>a</b>) The X4-MaG quadrotor equipped with an embedded Raspberry Camera weighs 367 grams and can fly autonomously for 10 min. It can fly closer to the artifact (here, a capital) to capture more details without endangering the artifact itself because of its small size and low weight. (<b>b</b>) The flying test arena in Marseilles with its 17 Vicon cameras can be used to control the quadrotor’s trajectory around the artifact. The ground-truth Vicon system provides the position of the quadrotor used for monitoring purposes, which is necessary to control in a closed loop.</p> "> Figure 2
<p>The X4-MaG robot is fully controlled via the Linux-based controller embedded onboard the Gumstix Overo. The image acquisition is performed with an Odroid W board and its small 5-Mpx Raspberry Camera Module.</p> "> Figure 3
<p>Interconnection between the various systems. The first ground station (PC 1) equipped with MATLAB/Simulink receives orientation and position data from the Vicon motion tracking system. PC 1 is connected to the Gumstix Overo via a WiFi connection and sends setpoints and parameters to the Gumstix high level autopilot via the MATLAB/Simulink toolbox. All of the attitude and position control processes are computed in real time onboard the aerial robot. The second ground station (PC 2) equipped with MicMac is connected to the Odroid W via a second WiFi connection. The Odroid communicates with the Gumstix via a serial connection (UART), which makes it possible to determine the camera’s position whenever a picture is taken. Pictures are sent directly to PC 2 via WiFi (FTP). The two PCs communicate via the local Ethernet network.</p> "> Figure 4
<p>Global overview of the final dense point cloud generation. First a picture of the highly overlapping data acquisition based on 180 photographs was obtained during a circular trajectory. Inputs consisting of the photographs, the camera’s positions and the internal calibration were delivered to the MicMac for processing. MicMac first generated a sparse point cloud, which was used to generate the second trajectory. Further calculations in MicMac’s epipolar dense matching mode resulted in the generation of a dense point cloud. Lastly, the two dense point clouds were combined, giving the final reconstruction.</p> "> Figure 5
<p>3D view of the first circular trajectory, giving the camera’s direction and the robot’s position (<math display="inline"> <semantics> <mrow> <mi>v</mi> <mo>=</mo> <mn>2</mn> <mo>.</mo> <mn>5</mn> </mrow> </semantics> </math> cm·s<math display="inline"> <semantics> <msup> <mrow/> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msup> </semantics> </math>).</p> "> Figure 6
<p>Orientation matching between the camera positions estimated by MicMac and given by the Vicon system. The numbers above each point correspond to the picture numbering code. (<b>a</b>) The camera positions when a picture was taken in the Vicon coordinate system; (<b>b</b>) the camera positions when a picture was taken in the MicMac coordinate system; (<b>c</b>) camera positions and orientations once the ICP algorithm had been computed.</p> "> Figure 7
<p>In black, the capital’s shape and its center as computed by the <span class="html-italic">α</span>-shape algorithm [<a href="#B17-remotesensing-08-00858" class="html-bibr">17</a>]. In blue, top view of the sparse 3D point cloud generated on the basis of the photographs taken during the first trajectory. The first and second trajectory’s position setpoints are also shown. The second trajectory was determined by dilating the 2D capital’s shape with a safety distance of 50 cm.</p> "> Figure 8
<p>On the left (<b>a</b>), a 3D view of the second trajectory gives the camera’s direction and the robot’s position (<math display="inline"> <semantics> <mrow> <mi>v</mi> <mo>=</mo> <mn>2</mn> <mo>.</mo> <mn>5</mn> </mrow> </semantics> </math> cm·s<math display="inline"> <semantics> <msup> <mrow/> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msup> </semantics> </math>). On the right (<b>b</b>–<b>e</b>), the positions X, Y, Z and the angle Ψ (yaw) versus time are plotted.</p> "> Figure 9
<p>Raw 3D reconstructions of the capital obtained with two image acquisition procedures using a Raspberry Pi camera module in (<b>a</b>–<b>c</b>) and the Nikon D90 in (<b>d</b>). The raw 3D model based on the 180 in-flight pictures taken during the first flight is shown in (<b>a</b>), and that based on the 180 in-flight pictures taken during the second flight is shown in (<b>b</b>). The 360 pictures of both flights were combined and merged, giving the final 3D reconstruction presented in (<b>c</b>). The results of manual acquisition based on the heavy reflex Nikon D90 (shown in (<b>d</b>)) were used as the main reference in the comparisons between procedures. Those screen-shots depict the improvement of the two-step reconstruction. One shall notice the similar density and accuracy with the heavy DSRL (Digital Single-Lens Reflex) manual acquisition, but a large occlusion area was present between the two volutes in the case of this manual acquisition. Moreover in (<b>c</b>), the reconstruction based on two aerial trajectory acquisitions filled this occlusion area, and the final 3D model does not show any large occlusion.</p> ">
Abstract
1. Introduction
2. Materials and Methods
2.1. A Quadrotor for an Automated Photogrammetric 3D Reconstruction System
2.2. Hardware Presentation
- A Vicon motion-capture system accurately determines the position and orientation of the robot at a frequency of 500 Hz. The round-trip latency (the time taken to travel through the local Ethernet network) between the computer and the Vicon system is very short (<12 ms).
- A ground station (PC 1) runs a Simulink host model in real time using QUARC® software (from Quanser): via WiFi 1, the Simulink-based program sends the 3D position given by the Vicon system to the high-level controller onboard the quadrotor. It also monitors the robot's position and sends setpoints to the embedded quadrotor autopilot.
- A ground station (PC 2) equipped with MicMac receives, via WiFi 2, the in-flight pictures of the capital from the Odroid and the orientation files from PC 1. All of the MicMac calculations and point-cloud generation processes are carried out on this ground station.
- The X4-MaG quadrotor, with its Gumstix Overo Computer-on-Module, receives its 3D position and setpoints from PC 1 and can compute its trajectory autonomously. An Odroid W board equipped with a Raspberry Camera Module takes pictures of the capital and sends the in-flight photographs to PC 2 via WiFi 2. The Odroid board also communicates with the Gumstix via a UART serial connection, which enables the camera's position to be recorded while each photograph is being taken.
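Because the Odroid logs the quadrotor's pose at each shutter event while the Vicon stream arrives at 500 Hz, each photograph can later be paired with the closest pose sample. A minimal sketch of that pairing (the data layout and function name are hypothetical illustrations, not the authors' code):

```python
import bisect

def nearest_pose(pose_log, t_shutter):
    """Pair a shutter timestamp with the closest entry of a 500 Hz pose log.
    pose_log: list of (timestamp_s, (x, y, z)) tuples sorted by timestamp."""
    times = [t for t, _ in pose_log]
    i = bisect.bisect_left(times, t_shutter)
    # The closest sample is either the first one at/after t_shutter or the one before it.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
    best = min(candidates, key=lambda j: abs(times[j] - t_shutter))
    return pose_log[best]

# 500 Hz stream: one pose every 2 ms
log = [(k * 0.002, (0.0, 0.0, 1.0)) for k in range(500)]
t, pose = nearest_pose(log, 0.1234)  # closest sample is at t = 0.124 s
```

At 500 Hz the pairing error is bounded by 1 ms, which is consistent with the sub-centimetre positioning budget discussed later.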
2.3. MicMac: A Tool for Photogrammetric 3D Reconstruction
- Tie-point extraction (using the SIFT algorithm) and image-pair recognition,
- Internal and external calibrations and global orientation of each image based on bundle adjustment,
- Dense image matching, resulting in the final point cloud.
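The three steps above map directly onto MicMac's `mm3d` command-line tools. The sketch below shows a typical run; the tool names (Tapioca, Tapas, Malt, Nuage2Ply) come from MicMac's documented interface, but the image pattern, calibration model, resolution arguments and output names are placeholder assumptions, not the settings used in this study:

```shell
# 1. Tie-point extraction (SIFT) and image-pair recognition,
#    first at 500 px and then at 1500 px resolution
mm3d Tapioca MulScale ".*.jpg" 500 1500

# 2. Bundle adjustment: internal calibration (radial distortion model)
#    and external orientation of every image
mm3d Tapas RadialStd ".*.jpg" Out=Ori1

# 3. Dense image matching in image geometry, then conversion of the
#    resulting depth maps to a PLY point cloud
mm3d Malt GeomImage ".*.jpg" Ori1 Master=IMG_0001.jpg
mm3d Nuage2Ply MM-Malt-Img-IMG_0001/NuageImProf_STD-MALT_Etape_8.xml
```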
2.4. 3D Reconstruction Procedure
2.4.1. Overview of the Procedure
2.4.2. First Trajectory: A Fast 3D Reconstruction Method
- errors made by MicMac when calculating the first camera position,
- lack of synchronization in the camera triggering,
- errors in the ICP matching algorithm,
- positioning errors made by Vicon (∼mm).
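The ICP step aligns the camera positions estimated by MicMac with those given by the Vicon system. Its core, the least-squares rigid transform between two sets of matched points, can be sketched as follows (a generic Kabsch/SVD solution, not the authors' implementation):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q --
    the inner step solved at each ICP iteration. P, Q: 3xN matched points."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Once (R, t) is found, bringing the MicMac coordinate frame onto the Vicon one amounts to applying the transform to every estimated camera position.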
2.4.3. Second Trajectory: Smart Trajectory Depending on the Artifact’s Shape
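As Figure 7 describes, the second trajectory is obtained by dilating the capital's 2D α-shape outline with a 50 cm safety distance. The sketch below is a deliberate simplification (a radial offset from the shape's center rather than a true morphological dilation of the α-shape polygon), meant only to illustrate the idea; the function and variable names are hypothetical:

```python
import numpy as np

def dilate_boundary(boundary_xy, center_xy, safety=0.5):
    """Push every boundary point radially away from the shape's center by
    `safety` metres -- a simplified stand-in for dilating the artifact's 2D
    shape to obtain the second trajectory's position setpoints."""
    v = boundary_xy - center_xy                      # center-to-boundary vectors
    r = np.linalg.norm(v, axis=1, keepdims=True)
    return boundary_xy + safety * v / r

# Example: a circular cross-section of radius 1 m; the dilated waypoints
# lie on a circle of radius 1.5 m (50 cm safety distance).
theta = np.linspace(0.0, 2.0 * np.pi, 180, endpoint=False)
boundary = np.c_[np.cos(theta), np.sin(theta)]
waypoints = dilate_boundary(boundary, np.zeros(2), safety=0.5)
```

For a convex outline this radial offset and the morphological dilation coincide up to the rounding of corners; for concave outlines the true dilation keeps the constant 50 cm clearance everywhere.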
3. Results
4. Discussion
- to use a miniature camera endowed with an internal stabilizer,
- to reduce the infrastructure cost by further improving the trade-off between accurate timing synchronization and the need to determine the camera’s position and direction accurately.
5. Conclusions
Supplementary Materials
Acknowledgments
Author Contributions
Conflicts of Interest
Abbreviations
APS-C | Advanced Photo System Type-C |
CH | Cultural Heritage |
CMOS | Complementary Metal Oxide Semiconductor |
DSLR | Digital Single Lens Reflex |
FTP | File Transfer Protocol |
GSD | Ground Sample Distance |
ICP | Iterative Closest Point |
IMU | Inertial Measurement Unit |
SIFT | Scale-Invariant Feature Transform |
UART | Universal Asynchronous Receiver-Transmitter |
UAV | Unmanned Aerial Vehicle |
References
- Levoy, M.; Pulli, K.; Curless, B.; Rusinkiewicz, S.; Koller, D.; Pereira, L.; Ginzton, M.; Anderson, S.; Davis, J.; Ginsberg, J.; et al. The Digital Michelangelo Project: 3D scanning of large statues. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’00, New Orleans, LA, USA, 23–28 July 2000; pp. 131–144.
- Stanco, F.; Battiato, S.; Gallo, G. Digital Imaging for Cultural Heritage Preservation: Analysis, Restoration, and Reconstruction of Ancient Artworks, 1st ed.; CRC Press, Inc.: Boca Raton, FL, USA, 2011. [Google Scholar]
- Callieri, M.; Scopigno, R.; Sonnino, E. Using 3D digital technologies in the restoration of the Madonna of Pietranico. ERCIM News, October 2011; 48. [Google Scholar]
- Santos, P.; Ritz, M.; Tausch, R.; Schmedt, H.; Monroy, R.; Stefano, A.D.; Posniak, O.; Fuhrmann, C.; Fellner, D.W. CultLab3D—On the verge of 3D mass digitization. In Proceedings of the Eurographics Workshop on Graphics and Cultural Heritage, The Eurographics Association, Aire-la-Ville, Switzerland, 6–8 October 2014; pp. 65–74.
- Gattet, E.; Devogelaere, J.; Raffin, R.; Bergerot, L.; Daniel, M.; Jockey, P.; De Luca, L. A versatile and low-cost 3D acquisition and processing pipeline for collecting mass of archaeological findings on the field. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, XL-5/W4, 299–305. [Google Scholar] [CrossRef]
- Pierrot-Deseilligny, M.; De Luca, L.; Remondino, F. Automated image-based procedures for accurate artifacts 3D modeling and orthoimage generation. Geoinform. FCE CTU 2011, 6, 291–299. [Google Scholar] [CrossRef]
- Toschi, I.; Capra, A.; De Luca, L.; Beraldin, J.A.; Cournoyer, L. On the evaluation of photogrammetric methods for dense 3D surface reconstruction in a metrological context. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 371–378. [Google Scholar] [CrossRef]
- Rosu, A.M.; Assenbaum, M.; De la Torre, Y.; Pierrot-Deseilligny, M. Coastal digital surface model on low contrast images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 40, 307–312. [Google Scholar] [CrossRef]
- Kriegel, S.; Rink, C.; Bodenmüller, T.; Narr, A.; Suppa, M.; Hirzinger, G. Next-best-scan Planning for autonomous 3D modeling. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura-Algarve, Portugal, 7–11 October 2012; pp. 2850–2856.
- Saleri, R.; Pierrot-Deseilligny, M.; Bardiere, E.; Cappellini, V.; Nony, N.; Luca, L.D.; Campi, M. UAV photogrammetry for archaeological survey: The Theaters area of Pompeii. In Proceedings of the Digital Heritage International Congress (DigitalHeritage), Marseille, France, 28 October–1 November 2013; Volume 2, pp. 497–502.
- Michael, N.; Mellinger, D.; Lindsey, Q.; Kumar, V. The grasp multiple micro-UAV testbed. IEEE Robot. Autom. Mag. 2010, 17, 56–65. [Google Scholar] [CrossRef]
- Lupashin, S.; Hehn, M.; Mueller, M.W.; Schoellig, A.P.; Sherback, M.; D’Andrea, R. A platform for aerial robotics research and demonstration: The Flying Machine Arena. Mechatronics 2014, 24, 41–54. [Google Scholar] [CrossRef]
- Manecy, A.; Marchand, N.; Ruffier, F.; Viollet, S. X4-MaG: A low-cost open-source micro-quadrotor and its Linux-based controller. Int. J. Micro Air Veh. 2015, 7, 89–110. [Google Scholar] [CrossRef]
- Manecy, A.; Marchand, N.; Viollet, S. RT-MaG: An open-source SIMULINK Toolbox for Real-Time Robotic Applications. In Proceedings of the IEEE International Conference on Robotics and Biomimetics, Bali, Indonesia, 5–10 December 2014.
- Fraser, C.S. Digital camera self-calibration. ISPRS J. Photogramm. Remote Sens. 1997, 52, 149–159. [Google Scholar] [CrossRef]
- Kjer, H.M.; Wilm, J. Evaluation of Surface Registration Algorithms for PET Motion Correction. Ph.D. Thesis, Technical University of Denmark (DTU), Kongens Lyngby, Denmark, 2010. [Google Scholar]
- Edelsbrunner, H.; Kirkpatrick, D.; Seidel, R. On the shape of a set of points in the plane. IEEE Trans. Inf. Theory 1983, 29, 551–559. [Google Scholar] [CrossRef]
- Margottini, C.; Fidolini, F.; Iadanza, C.; Trigila, A.; Ubelmann, Y. The conservation of the Shahr-e-Zohak archaeological site (central Afghanistan): Geomorphological processes and ecosystem-based mitigation. Geomorphology 2015, 239, 73–90. [Google Scholar] [CrossRef]
- Moutinho, O.F.G. Evaluation of Photogrammetric Solutions for RPAS: Commercial vs. Open Source. Master’s Thesis, University of Porto, Porto, Portugal, 2015. [Google Scholar]
- Aber, J.S.; Marzolff, I.; Ries, J. Small-Format Aerial Photography: Principles, Techniques and Geoscience Applications; Elsevier: Amsterdam, The Netherlands; Oxford, UK, 2010. [Google Scholar]
- Raharijaona, T.; Mignon, P.; Juston, R.; Kerhuel, L.; Viollet, S. HyperCube: A small lensless position sensing device for the tracking of flickering infrared LEDs. Sensors 2015, 15, 16484–16502. [Google Scholar] [CrossRef] [PubMed]
- Peteler, F.; Gattet, E.; Bromblet, P.; Guillon, O.; Vallet, J.M.; De Luca, L. Analyzing the evolution of deterioration patterns: A first step of an image-based approach for comparing multitemporal data sets. In Proceedings of the 2015 Digital Heritage, Granada, Spain, 28 September–2 October 2015; Volume 2, pp. 113–116.
- Dellepiane, M.; Cavarretta, E.; Cignoni, P.; Scopigno, R. Assisted multi-view stereo reconstruction. In Proceedings of the 2013 International Conference on 3D Vision-3DV 2013, Seattle, WA, USA, 29 June–1 July 2013; pp. 318–325.
Specifications | |
---|---|
Mass (g) | 3 |
Resolution (Mpx) | 5 |
Field of view (°) | |
Image sensor area (mm²) | |
Pixel size (μm) | |
Signal-to-noise ratio (SNR) (dB) | 36 |
 | Mean Error | Max Error | Standard Deviation |
---|---|---|---|
X (cm) |||
Y (cm) |||
Z (cm) |||
Global (cm) |||
Ψ (°) |||
 | Mean Error | Max Error | Standard Deviation |
---|---|---|---|
Camera position (cm) |||
Camera direction (°) |||
 | Mean Error | Max Error | Standard Deviation |
---|---|---|---|
X (cm) |||
Y (cm) |||
Z (cm) |||
Global (cm) |||
Ψ (°) |||
 | Automated Quadrotor Embedded Raspberry Pi Camera | Manual Reflex Nikon D90 Operated by a Photographer |
---|---|---|
Sensor size (mm) | 3.76 × 2.74 | 23.6 × 15.8 |
Resolution (megapixels) | 5 | 12.9 |
Image resolution (pixels) | 2592 × 1944 | 4288 × 2848 |
Pixel size (μm) | 1.4 | 5.5 |
Focal length (mm) | 3.6 | 38 |
35-mm equivalent focal length (mm) | 36 | 58 |
 | Automated Quadrotor Embedded Raspberry Pi Camera | Manual Reflex Nikon D90 Operated by a Photographer |
---|---|---|
Distance (m) | 0.7 | 1.2 |
GSD (mm) | 0.27 | 0.16 |
Graphical error (mm) | 2.7 | 0.24 |
 | Calib | Traj1 | Traj1 + Traj2 | Nikon D90 |
---|---|---|---|---|
Residue (px) | 0.37 | 2.22 | 2.77 | 1.22 |
Number of pictures | 11 | 180 | 360 | 37 |
Metric residue (mm·px) | | | 7.4 | 0.3 |
Subsampling (px/line) | | | 8 | 8 |
Point-cloud GSD (mm) | | | 2.16 | 1.28 |
Number of points | | | 34,000 | 348,000 |
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).
Louiset, T.; Pamart, A.; Gattet, E.; Raharijaona, T.; De Luca, L.; Ruffier, F. A Shape-Adjusted Tridimensional Reconstruction of Cultural Heritage Artifacts Using a Miniature Quadrotor. Remote Sens. 2016, 8, 858. https://doi.org/10.3390/rs8100858