A Novel Approach for Dynamic (4d) Multi-View Stereo System Camera Network Design
Figure 1. Scheme of the proposed views network planning algorithm.

Figure 2. Examples of the sphere with rendered textures used for final validation: (a) RGB noise (random colours), (b) colour triangles, (c) stained glass, (d) random colour line segments, and (e) random black points.

Figure 3. Example of (a) a sphere reconstruction from the two-camera scenario; (b) the results of the density computation (points/mm²); (c) the accuracy evaluation (mm).

Figure 4. Example of (a) a 12-camera setup with the reconstructed sphere; (b) a view of the 360-degree scene with the cameras distributed on columns; (c) a screenshot from 3ds Max of the same 360-degree scene.

Figure 5. Example scene volumes: (a) the measurement volume, shown as a black cuboid, and the permitted volume, shown as the segments on which cameras may be mounted, as described in this paper; (b) an example angular range within which predictions are calculated (45–135 degrees in the polar direction, 0–360 degrees in the azimuthal direction).

Figure 6. Example of (a) a discretised measurement volume and (b) the normal directions of single points used for calculating predictions.

Figure 7. An example scene with a camera setup and a discretised measurement volume, with the number of observing cameras shown on the colour scale.

Figure 8. Reconstruction coverage in the two-camera scenario with the sphere on the *Z*-axis at a 2 m distance: (a) evaluations, (b) predictions, (c) differences. In (a) and (b), red points mark areas where the sphere was reconstructed; in (c), red points mark areas where the evaluations differed from the predictions. Blue points show the opposite.

Figure 9. Reconstruction coverage in the six-camera scenario with the sphere on the *Z*-axis at a 2 m distance: (a) evaluations, (b) predictions, (c) differences. In (a) and (b), red points mark areas where the sphere was reconstructed; in (c), red points mark areas where the evaluations differed from the predictions. Blue points show the opposite.

Figure 10. Density evaluations of the reconstructed sphere at distances of (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras; from bottom to top: the two-, three-, four-, six-, nine-, and twelve-camera scenarios.

Figure 11. Density predictions for the reconstructed sphere at distances of (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras; from bottom to top: the two-, three-, four-, six-, nine-, and twelve-camera scenarios.

Figure 12. The relative difference between the density predictions and evaluations for the reconstructed sphere at distances of (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras; from bottom to top: the two-, three-, four-, six-, nine-, and twelve-camera scenarios. Values higher than 30% are marked in black.

Figure 13. Accuracy evaluations of the reconstructed sphere at distances of (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras; from bottom to top: the two-, three-, four-, six-, nine-, and twelve-camera scenarios. Values higher than 0.2 mm at 2 m, 0.25 mm at 3 m, 0.35 mm at 4 m, and 0.4 mm at 5 m, respectively, are marked in black.

Figure 14. Predicted accuracy of the reconstructed sphere at distances of (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras; from bottom to top: the two-, three-, four-, six-, nine-, and twelve-camera scenarios. Values higher than 0.2 mm at 2 m, 0.25 mm at 3 m, 0.35 mm at 4 m, and 0.4 mm at 5 m, respectively, are marked in black.

Figure 15. The relative difference between the accuracy predictions and evaluations for the reconstructed sphere at distances of (a) 2 m, (b) 3 m, (c) 4 m, and (d) 5 m. Spheres in different rows were reconstructed from scenarios with different numbers of cameras; from bottom to top: the two-, three-, four-, six-, nine-, and twelve-camera scenarios. Values higher than 100% are marked in black.

Figure 16. Average 360-degree scene with (a) the 20-camera distribution and 5 reconstructed spheres in the measurement volume, with (b) density and (c) accuracy evaluations. Values higher than 3.0 points/mm² in (b) and 0.5 mm in (c) are marked in black.

Figure 17. Quasi-optimal 360-degree scene with (a) the 20-camera distribution and 5 reconstructed spheres within the measurement volume, with (b) density and (c) accuracy evaluations. Values higher than 3.0 points/mm² in (b) and 0.5 mm in (c) are marked in black.

Figure 18. The measurement system used for 3D reconstruction of real data: (a) the camera setup with the measurement region (black cuboid) and an example human body reconstruction; (b) an image captured by one camera of the measurement system.

Figure 19. 3D reconstructions of the human subject in different poses: (a) pose 1, (b) pose 2, (c) pose 3, and (d) pose 4, with the corresponding unoccluded sparse point clouds (e–h).

Figure 20. Sparse point clouds of the subject in different poses: (a) pose 1, (b) pose 2, (c) pose 3, and (d) pose 4, with density evaluations presented as colour maps.

Figure 21. The differences in the point distributions obtained from the same synthetic images by (a) OpenMVS (patch-based stereo) and (b) Agisoft Metashape (semi-global matching).
Abstract
1. Introduction
2. Related Work
3. Materials and Methods
- Set input parameters:
  - Define scene volumes:
    - Measurement volumes;
    - Permitted volumes/areas/lines for sensor positions.
  - Define the angular range for which predictions are calculated.
  - Define the number of available cameras and their properties.
  - Define the target coverage, density, and accuracy for each measurement volume.
- Automatic algorithm (a minimal code sketch follows this list):
  - Discretise the measurement volumes into points at which predictions will be calculated.
  - Repeat the following procedure until the stop condition is fulfilled:
    - Generate a random camera setup.
    - Calculate coverage, accuracy, and density predictions.
    - Calculate statistics from the predictions.
    - Check whether the required predictions are achieved:
      - If yes, and a lower camera count has not been checked, decrease the camera count and repeat generating camera setups.
      - If yes, and a lower camera count has already been checked and did not fulfil the requirements, select the current setup as the quasi-optimal set.
      - If no, and a higher camera count has not been checked, increase the camera count and repeat generating camera setups.
      - If no, and a higher camera count has already been calculated and fulfilled the requirements, select the best result from the higher camera count.
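The search over camera counts can be summarised as the following Python sketch. It is a minimal illustration of the loop above, not the authors' implementation: the setup generator, the cost computed from the prediction statistics, and the target check are passed in as placeholder callables, and `trials_per_count` stands in for the stop condition.

```python
from typing import Any, Callable, Dict, Tuple

def design_camera_network(
    generate_setup: Callable[[int], Any],   # random setup with n cameras in the permitted volumes
    score: Callable[[Any], float],          # cost from the prediction statistics (lower is better)
    meets_targets: Callable[[Any], bool],   # are the target coverage/density/accuracy reached?
    n_start: int = 8,
    trials_per_count: int = 500,            # stand-in for the stop condition
    n_min: int = 2,
    n_max: int = 50,
) -> Tuple[int, Any]:
    """Search over camera counts for a quasi-optimal setup (hypothetical interface)."""
    best: Dict[int, Any] = {}               # camera count -> best random setup found
    met: Dict[int, bool] = {}               # camera count -> did that best setup meet the targets?
    n = n_start
    while n_min <= n <= n_max:
        best[n] = min((generate_setup(n) for _ in range(trials_per_count)), key=score)
        met[n] = meets_targets(best[n])
        if met[n]:
            if (n - 1) not in best and n - 1 >= n_min:
                n -= 1                      # targets met: try one camera fewer
            else:
                return n, best[n]           # smallest count meeting the targets: quasi-optimal
        else:
            if met.get(n + 1):
                return n + 1, best[n + 1]   # this count fails; the next one up already succeeded
            n += 1                          # targets missed: try one camera more
    raise RuntimeError("no setup met the targets within the camera-count bounds")
```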
3.1. Prediction Functions
3.1.1. Dataset Preparation
- Generate random camera setups:
  - Randomly place the cameras on the permitted columns.
  - For each camera, select a random point within the measurement volume to look at (a pose sketch follows this list).
- Add synthetic objects to the scene.
- Render images from all cameras.
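As one illustration of the placement and look-at steps, the sketch below puts a camera at a random height on a permitted column and builds a world-to-camera rotation aimed at a random point in the measurement volume. The function names and the vertical-column parameterisation are assumptions for the example, not the authors' code; the up vector must not be parallel to the viewing direction.

```python
import numpy as np

def look_at_rotation(cam_pos, target, up=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix whose rows are the camera x/y/z axes, with the
    optical (z) axis pointing from cam_pos towards target."""
    forward = target - cam_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)            # assumes forward is not parallel to up
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    return np.stack([right, true_up, forward])

def random_camera_on_column(column_base, column_height, vol_min, vol_max, rng):
    """Random camera pose: a point on a vertical permitted column, looking at
    a random point inside the cuboid measurement volume [vol_min, vol_max]."""
    pos = np.asarray(column_base, dtype=float)
    pos = pos + np.array([0.0, 0.0, rng.uniform(0.0, column_height)])
    target = rng.uniform(np.asarray(vol_min, float), np.asarray(vol_max, float))
    return pos, look_at_rotation(pos, target)

# Example: one camera on a 3 m column at (4, 0, 0), aimed into a 2 m cube.
rng = np.random.default_rng(0)
pos, R = random_camera_on_column([4.0, 0.0, 0.0], 3.0,
                                 [-1.0, -1.0, 0.0], [1.0, 1.0, 2.0], rng)
```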
3.1.2. Prediction Parameter Optimisation
3.1.3. Reconstruction Coverage
- The angle between the optical camera axes;
- The base-to-height (B/H) ratio; and
- The magnification difference between the cameras at a given point (standard definitions of all three are sketched below).
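The paper fits its own prediction functions to these quantities; as a rough guide, the sketch below computes the three geometric inputs for one camera pair and one scene point using their standard photogrammetric definitions. These definitions are an assumption, since the original equations are not reproduced here.

```python
import numpy as np

def pair_geometry(c1, d1, c2, d2, p):
    """Geometric inputs for a camera pair and a scene point.

    c1, c2: camera centres; d1, d2: unit optical-axis directions; p: scene
    point (assumed in front of both cameras). Standard definitions, not the
    paper's fitted prediction functions.
    """
    # Angle between the optical camera axes (degrees).
    axis_angle = np.degrees(np.arccos(np.clip(np.dot(d1, d2), -1.0, 1.0)))
    # Base-to-height (B/H) ratio: baseline over mean camera-to-point distance.
    mean_dist = 0.5 * (np.linalg.norm(p - c1) + np.linalg.norm(p - c2))
    bh_ratio = np.linalg.norm(c2 - c1) / mean_dist
    # Relative magnification difference at p: for equal focal lengths, image
    # scale is inversely proportional to the depth along each optical axis.
    z1, z2 = np.dot(p - c1, d1), np.dot(p - c2, d2)
    mag_diff = abs(z2 - z1) / max(z1, z2)
    return axis_angle, bh_ratio, mag_diff
```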
3.1.4. Accuracy Prediction Function
3.1.5. Density Prediction Function
3.2. Camera Network Design Algorithm
3.2.1. Defining Scene Volumes
- Measurement; and
- Permitted.
3.2.2. Camera Properties
3.2.3. Discretising Measurement Volume
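A common way to realise this step, consistent with Figure 6, is a regular grid of points inside each cuboid measurement volume, each point carrying a set of roughly uniform candidate normal directions. The Fibonacci sphere lattice below is an assumption for the example, not necessarily the authors' scheme.

```python
import numpy as np

def grid_points(vmin, vmax, step):
    """Regular grid of prediction points inside a cuboid measurement volume."""
    axes = [np.arange(lo, hi + 1e-9, step) for lo, hi in zip(vmin, vmax)]
    return np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)

def fibonacci_directions(n):
    """n roughly uniform unit directions (Fibonacci sphere lattice)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increment
    z = 1.0 - 2.0 * (i + 0.5) / n
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
```

The golden-angle lattice keeps the directions roughly area-uniform over the sphere, so per-direction statistics are not biased towards the poles.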
3.2.4. Generating Random Camera Setup
3.2.5. Cost Function and Optimal Camera Setup Selection
- The ratio of fully reconstructible points;
- The ratio of reconstructible directions over all points; and
- The minimum, average, median, and standard deviation of the density and accuracy over the reconstructible directions (an aggregation sketch follows this list).
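A minimal sketch of how such statistics might be gathered and folded into a scalar cost is given below. The weights and the exact cost form are illustrative assumptions, not the paper's cost function.

```python
import numpy as np

def setup_statistics(coverage_mask, densities, accuracies):
    """Summarise per-point predictions for one camera setup.

    coverage_mask: boolean array of shape (points, directions);
    densities / accuracies: predictions for the reconstructible directions.
    """
    return {
        "full_points": float(np.mean(np.all(coverage_mask, axis=1))),  # fully reconstructible points
        "directions": float(np.mean(coverage_mask)),                   # reconstructible directions, all points
        "density": (np.min(densities), np.mean(densities),
                    np.median(densities), np.std(densities)),
        "accuracy": (np.min(accuracies), np.mean(accuracies),
                     np.median(accuracies), np.std(accuracies)),
    }

def cost(stats, w_cov=1.0, w_den=1.0, w_acc=1.0):
    """Scalar cost, lower is better; the weights are illustrative."""
    return (w_cov * (1.0 - stats["directions"])
            + w_den / max(stats["density"][1], 1e-9)   # prefer high average density
            + w_acc * stats["accuracy"][1])            # prefer low average error (mm)
```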
4. Results
4.1. Prediction Parameter Optimisation
4.1.1. Reconstruction Coverage
4.1.2. Density
4.1.3. Accuracy
4.2. 360-Degree Scene Camera Setup Generation
- Reconstructible direction ratio for all points: (46.5%, 93.6%);
- Average density: (0.32, 0.81) points/mm²;
- Average accuracy: (0.21, 0.72) mm.
4.3. 360-Degree Real Camera Setup
4.4. Summary
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
Appendix A
[Appendix image grid: rows (a)–(c); columns: Evaluations, Predictions, Relative Difference.]

[Appendix image grid: rows (a)–(c); columns: Evaluations, Predictions, Relative Difference.]
Texture Type | Number of Points | RMSE of Deviations from the 500 mm Sphere [mm]
---|---|---|
RGB noise (random colours) | 11,497,207 | 0.26 |
Colour triangles | 7,307,546 | 0.52 |
Stained glass | 9,722,841 | 0.67 |
Random colour line segments | 10,683,159 | 0.33 |
Random black points | 11,708,933 | 0.33 |
Number of Cameras | Coverage Prediction Error [%]
---|---|
2-camera | 3.75 |
3-camera | 2.86 |
4-camera | 2.79 |
6-camera | 3.9 |
9-camera | 3.8 |
12-camera | 4.1 |
Scene Type | Average Difference (%) | RMS Difference (%) | Median Difference (%) | Standard Deviation (%) |
---|---|---|---|---|
2-camera | 2.49 | 0.019 | 1.97 | 1.84 |
3-camera | 2.83 | 0.039 | 2.14 | 2.26 |
4-camera | 15.97 | 0.019 | 15.05 | 10.68 |
6-camera | 10.72 | 0.118 | 9.19 | 7.19 |
9-camera | 8.79 | 0.089 | 7.28 | 5.94 |
12-camera | 10.73 | 0.099 | 9.81 | 5.61 |
Scene Type | Average Difference (%) | RMSE (%) | Median Difference (%) | Standard Deviation (%) |
---|---|---|---|---|
2-camera | 50.84 | 0.35 | 44.79 | 26.29 |
3-camera | 21.37 | 0.29 | 15.83 | 16.79 |
4-camera | 21.95 | 0.32 | 15.54 | 17.76 |
6-camera | 20.93 | 0.29 | 15.46 | 15.78 |
9-camera | 20.05 | 0.29 | 16.48 | 13.11 |
12-camera | 28.89 | 0.44 | 25.85 | 17.41 |
Value | Average Setup | Quasi-Optimal Setup |
---|---|---|
Coverage | 68.2% | 92.3% |
Average density | ||
Median density | ||
Average accuracy | 0.29 mm | 0.42 mm |
Median accuracy | 0.19 mm | 0.28 mm |
Value | Statistics Type | Average Setup (%) | Quasi-Optimal Setup (%)
---|---|---|---
Coverage | Error prediction | 5.38 | 3.36
Density | Average difference | 32.0 | 23.17
 | RMSE | 0.38 | 0.21
 | Median difference | 25.53 | 21.42
 | Standard deviation | 26.73 | 12.07
Accuracy | Average difference | 36.85 | 34.00
 | RMSE | 0.41 | 0.32
 | Median difference | 28.9 | 28.67
 | Standard deviation | 25.38 | 22.10
Value | Pose 1 | Pose 2 | Pose 3 | Pose 4 |
---|---|---|---|---|
Average density | ||||
Median density |
Value | Statistics Type | Pose 1 (%) | Pose 2 (%) | Pose 3 (%) | Pose 4 (%)
---|---|---|---|---|---
Density | Average difference | 39.8 | 44.2 | 33.8 | 45.0
 | RMSE | 1.43 | 1.71 | 1.39 | 1.63
 | Median difference | 40.9 | 45.7 | 35.6 | 46.4
 | Standard deviation | 9.97 | 9.74 | 10.69 | 8.09
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).