From a Point Cloud to a Simulation Model—Bayesian Segmentation and Entropy Based Uncertainty Estimation for 3D Modelling
Figure 1: <p>Process diagram of generating an environment model on the basis of a point cloud. The process steps are displayed in rectangle-shaped boxes. Input and output data are represented by boxes with rounded corners. The segmentation step is highlighted, as this step is explained in more detail in [<a href="#B10-entropy-23-00301" class="html-bibr">10</a>].</p>
Figure 2: <p>Impact of occlusions on a point cloud. (<b>a</b>) The point cloud of the car looks complete on the surface facing the laser scanner. (<b>b</b>) The bottom view of the car reveals the empty interior caused by occlusions. The roof of the car is incomplete due to the reflection of the impinging laser beams.</p>
Figure 3: <p>Locations of data collection. (<b>a</b>) Laser scanner positions for collecting four (red) and six (green) laser scans per tact. We collect laser scans of the assembly line and the surrounding building to generate our data set. The six-laser-scan set-up is used for evaluation purposes. (<b>b</b>) Concept of taking camera images.</p>
Figure 4: <p>Summary of the generated point cloud. (<b>a</b>) One example tact of the collected data set. (<b>b</b>) Highly imbalanced class distribution.</p>
Figure 5: <p>The two test tacts of the automotive factory data set. (<b>a</b>) First test tact displaying a blue vehicle to be manufactured. (<b>b</b>) Second test tact illustrating a white vehicle later in the assembly process.</p>
Figure 6: <p>Point clouds resulting from different data sources. (<b>a</b>) Four laser scans. (<b>b</b>) Six laser scans. (<b>c</b>) Photogrammetry with wide-angle lens. (<b>d</b>) Photogrammetry with fish-eye lens. (<b>e</b>) Four laser scans and photogrammetry. (<b>f</b>) Six laser scans and photogrammetry.</p>
Figure 7: <p>Visualization of the combined laser scan and photogrammetric point cloud as a mesh.</p>
Figure 8: <p>One of the test tacts in the automotive factory data set, where certain points are illustrated in black and uncertain points in red. The uncertainty is measured using the predictive uncertainty. (<b>a</b>) The uncertainty threshold is set to the mean plus three sigma. (<b>b</b>) The uncertainty threshold is set to the mean plus one sigma.</p>
Figure 9: <p>The simulation model generated by our modelling approach displayed in the UE4. (<b>a</b>) Simulation model of one test tact. (<b>b</b>) Simulation model when both test tacts are processed simultaneously.</p>
Abstract
1. Introduction
1.1. Greenfield and Brownfield Planning
1.2. Factory Planning and Digitalization
1.3. Contributions of the Paper
- Framework: We describe a comprehensive and methodical modelling approach starting with the digitalization of large-scale industrial environments using laser scans and photogrammetry and ending with the generation of a static environment model.
- Experiment: We evaluate the quality of factory digitalization in terms of the accuracy, completeness and point density of the resulting point cloud. Further, the accuracy of the final environment model is evaluated. The segmentation model used in this work is presented and evaluated in [10].
- Potential: We provide an estimation of the economic potential of automated factory digitalization as well as simulation model generation for a number of exemplary production plants.
2. Data Modelling
2.1. Point Clouds and Environment Modelling
2.2. Bayesian Neural Networks and Uncertainty Definition
2.3. Data Collection
2.4. Data Pre-Processing
3. Methods
3.1. Point Cloud Segmentation
3.1.1. Notation and Preliminaries
3.1.2. Bayesian Segmentation Network
3.2. Pose Estimation
Algorithm 1: Pose estimation
3.3. Simulation Model Generation
4. Results and Analysis
4.1. Evaluation Data Set
4.2. Results
4.2.1. Data Collection and Visualization
4.2.2. Bayesian Segmentation
4.2.3. Uncertainty Estimation
4.2.4. Pose Estimation
5. Discussion and Conclusions
5.1. Discussion
5.2. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
2D | two-dimensional
3D | three-dimensional
AML | Automation Markup Language
BNN | Bayesian Neural Network
CAD | Computer Aided Design
DBSCAN | Density-Based Spatial Clustering of Applications with Noise
DoF | Degrees of Freedom
ICP | Iterative Closest Points
IoU | Intersection over Union
KL | Kullback-Leibler
OEM | Original Equipment Manufacturer
OPTICS | Ordering Points To Identify the Clustering Structure
RANSAC | Random Sample Consensus
UE4 | Unreal Engine 4
VDI | Verein Deutscher Ingenieure (Association of German Engineers)
References
- VDI-Fachbereich Fabrikplanung und -betrieb. VDI-Richtlinie: VDI 5200, Blatt 1: Fabrikplanung–Planungsvorgehen. 2011. Available online: https://www.vdi.de/richtlinien/details/vdi-5200-blatt-1-fabrikplanung-planungsvorgehen (accessed on 15 January 2021).
- Kuhn, W. Digital Factory—Simulation Enhancing the Product and Production Engineering Process. In Proceedings of the 2006 Winter Simulation Conference, Monterey, CA, USA, 3–6 December 2006; pp. 1899–1906. [Google Scholar]
- Bauernhansl, T.; Ten Hompel, M.; Vogel-Heuser, B. Industrie 4.0 in Produktion, Automatisierung und Logistik: Anwendung-Technologien-Migration; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
- Schenk, M.; Wirth, S.; Müller, E. Factory Planning Manual; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
- Shellshear, E.; Berlin, R.; Carlson, J.S. Maximizing Smart Factory Systems by Incrementally Updating Point Clouds. IEEE Comput. Graph. Appl. 2015, 35, 62–69. [Google Scholar] [CrossRef] [PubMed]
- Luhmann, T. Close range photogrammetry for industrial applications. ISPRS J. Photogramm. Remote. Sens. 2010, 65, 558–569. [Google Scholar] [CrossRef]
- Huang, H.; Li, D.; Zhang, H.; Ascher, U.; Cohen-Or, D. Consolidation of unorganized point clouds for surface reconstruction. ACM Trans. Graph. TOG 2009, 28, 1–7. [Google Scholar] [CrossRef] [Green Version]
- Zhou, Y.; Shen, S.; Hu, Z. Detail preserved surface reconstruction from point cloud. Sensors 2019, 19, 1278. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Petschnigg, C.; Bartscher, S.; Pilz, J. Point Based Deep Learning to Automate Automotive Assembly Simulation Model Generation with Respect to the Digital Factory. In Proceedings of the 2020 9th International Conference on Industrial Technology and Management (ICITM), Oxford, UK, 11–13 February 2020; pp. 96–101. [Google Scholar]
- Petschnigg, C.; Pilz, J. Uncertainty Estimation in Deep Neural Networks for Point Cloud Segmentation in Factory Planning. Modelling 2021, 2, 1–17. [Google Scholar] [CrossRef]
- Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote. Sens. 2015, 104, 88–100. [Google Scholar] [CrossRef]
- Landrieu, L.; Simonovsky, M. Large-scale point cloud semantic segmentation with superpoint graphs. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4558–4567. [Google Scholar]
- Lu, X.; Yao, J.; Tu, J.; Li, K.; Li, L.; Liu, Y. Pairwise Linkage for Point Cloud Segmentation. ISPRS Ann. Photogramm. Remote. Sens. Spat. Inf. Sci. 2016, 3. [Google Scholar] [CrossRef] [Green Version]
- Ravanbakhsh, S.; Schneider, J.; Poczos, B. Deep Learning with Sets and Point Clouds. arXiv 2016, arXiv:1611.04500. [Google Scholar]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 652–660. [Google Scholar]
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. Adv. Neural Inf. Process. Syst. 2017, 5099–5108. [Google Scholar]
- Wu, Z.; Song, S.; Khosla, A.; Yu, F.; Zhang, L.; Tang, X.; Xiao, J. 3D ShapeNets: A deep representation for volumetric shapes. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1912–1920. [Google Scholar]
- Qi, C.R.; Su, H.; Nießner, M.; Dai, A.; Yan, M.; Guibas, L.J. Volumetric and Multi-view CNNs for Object Classification on 3D Data. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 5648–5656. [Google Scholar]
- Zhou, Y.; Tuzel, O. Voxelnet: End-to-end learning for Point Cloud Based 3D Object Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499. [Google Scholar]
- Chen, X.; Ma, H.; Wan, J.; Li, B.; Xia, T. Multi-view 3D Object Detection Network for Autonomous Driving. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1907–1915. [Google Scholar]
- Feng, D.; Rosenbaum, L.; Dietmayer, K. Towards Safe Autonomous Driving: Capture Uncertainty in the Deep Neural Network For Lidar 3D Vehicle Detection. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 3266–3273. [Google Scholar]
- Yang, B.; Luo, W.; Urtasun, R. PIXOR: Real-time 3D Object Detection from Point Clouds. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7652–7660. [Google Scholar]
- Xie, Y.; Jiaojiao, T.; Zhu, X. Linking Points With Labels in 3D: A Review of Point Cloud Semantic Segmentation. IEEE Geosci. Remote. Sens. Mag. 2020, 8, 38–59. [Google Scholar] [CrossRef] [Green Version]
- Maas, H.G.; Vosselman, G. Two algorithms for extracting building models from raw laser altimetry data. ISPRS J. Photogramm. Remote. Sens. 1999, 54, 153–163. [Google Scholar] [CrossRef]
- Poux, F.; Billen, R.; Kaspryzk, J.P.; Lefebvre, P.H.; Hallot, P. A Built Heritage Information System Based on Point Cloud Data: HIS-PC. ISPRS Int. J. Geo-Inf. 2020, 9, 588. [Google Scholar] [CrossRef]
- Pu, S.; Vosselman, G. Extracting windows from terrestrial laser scanning. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2007, 36, 12–14. [Google Scholar]
- Becker, S.; Haala, N. Refinement of building fassades by integrated processing of lidar and image data. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2007, 36, 7–12. [Google Scholar]
- Liu, C.; Wu, J.; Furukawa, Y. Floornet: A unified framework for floorplan reconstruction from 3d scans. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 203–219. [Google Scholar]
- Díaz-Vilariño, L.; Khoshelham, K.; Martínez-Sánchez, J.; Arias, P. 3D modeling of building indoor spaces and closed doors from imagery and point clouds. Sensors 2015, 15, 3491–3512. [Google Scholar] [CrossRef] [Green Version]
- Malihi, S.; Zoej, M.V.; Hahn, M.; Mokhtarzade, M.; Arefi, H. 3D building reconstruction using dense photogrammetric point cloud. Proc. Int. Arch. Photogramm. Remote. Sens. Spat. Inf. Sci. 2016, 3, 71–74. [Google Scholar] [CrossRef] [Green Version]
- Xiao, J.; Gerke, M.; Vosselman, G. Building extraction from oblique airborne imagery based on robust façade detection. ISPRS J. Photogramm. Remote. Sens. 2012, 68, 56–68. [Google Scholar] [CrossRef]
- Avetisyan, A.; Dahnert, M.; Dai, A.; Savva, M.; Chang, A.X.; Nießner, M. Scan2CAD: Learning CAD Model Alignment in RGB-D Scans. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 2614–2623. [Google Scholar]
- Avetisyan, A.; Dai, A.; Nießner, M. End-to-End CAD Model Retrieval and 9DoF Alignment in 3D Scans. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 2551–2560. [Google Scholar]
- Rusu, R.B.; Bradski, G.; Thibaux, R.; Hsu, J. Fast 3D recognition and pose using the Viewpoint Feature Histogram. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 2155–2162. [Google Scholar]
- Aldoma, A.; Vincze, M.; Blodow, N.; Gossow, D.; Gedikli, S.; Rusu, R.B.; Bradski, G. CAD-model recognition and 6DOF pose estimation using 3D cues. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 585–592. [Google Scholar]
- Graves, A. Practical Variational Inference for Neural Networks. Adv. Neural Inf. Process. Syst. 2011, 2348–2356. [Google Scholar]
- Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
- Brooks, S.; Gelman, A.; Jones, G.; Meng, X.L. Handbook of Markov Chain Monte Carlo; CRC Press: Boca Raton, FL, USA, 2011. [Google Scholar]
- Gelfand, A.E.; Smith, A.F. Sampling-Based Approaches to Calculating Marginal Densities. J. Am. Stat. Assoc. 1990, 85, 398–409. [Google Scholar] [CrossRef]
- Duane, S.; Kennedy, A.D.; Pendleton, B.J.; Roweth, D. Hybrid Monte Carlo. Phys. Lett. B 1987, 195, 216–222. [Google Scholar] [CrossRef]
- Rue, H.; Martino, S.; Chopin, N. Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations. J. R. Stat. Soc. Ser. B Stat. Methodol. 2009, 71, 319–392. [Google Scholar] [CrossRef]
- Der Kiureghian, A.; Ditlevsen, O. Aleatory or epistemic? Does it matter? Struct. Saf. 2009, 31, 105–112. [Google Scholar] [CrossRef]
- Gal, Y.; Islam, R.; Ghahramani, Z. Deep Bayesian Active Learning with Image Data. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, Sydney, Australia, 6–11 August 2017; pp. 1183–1192. [Google Scholar]
- Steinbrener, J.; Posch, K.; Pilz, J. Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference. Sensors 2020, 20, 6011. [Google Scholar] [CrossRef] [PubMed]
- Previtali, M.; Scaioni, M.; Barazzetti, L.; Brumana, R. A flexible methodology for outdoor/indoor building reconstruction from occluded point clouds. ISPRS Ann. Photogramm. Remote. Sens. Spat. Inf. Sci. 2014, 2, 119. [Google Scholar] [CrossRef] [Green Version]
- Thomson, C.; Apostolopoulos, G.; Backes, D.; Boehm, J. Mobile Laser Scanning for Indoor Modelling. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci 2013, 5, 66. [Google Scholar] [CrossRef] [Green Version]
- Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256. [Google Scholar] [CrossRef]
- Pech-Pacheco, J.L.; Cristóbal, G.; Chamorro-Martinez, J.; Fernández-Valdivia, J. Diatom autofocusing in brightfield microscopy: A comparative study. In Proceedings of the 15th International Conference on Pattern Recognition. ICPR-2000, Barcelona, Spain, 3–7 September 2000; Volume 3, pp. 314–317. [Google Scholar]
- Forkuo, E.K.; King, B. Automatic fusion of photogrammetric imagery and laser scanner point clouds. Int. Arch. Photogramm. Remote. Sens. 2004, 35, 921–926. [Google Scholar]
- Westoby, M.J.; Brasington, J.; Glasser, N.F.; Hambrey, M.J.; Reynolds, J.M. ‘Structure-from-Motion’ photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology 2012, 179, 300–314. [Google Scholar] [CrossRef] [Green Version]
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
- Posch, K.; Pilz, J. Correlated Parameters to Accurately Measure Uncertainty in Deep Neural Networks. IEEE Trans. Neural Networks Learn. Syst. 2020, 32, 1037–1051. [Google Scholar] [CrossRef] [Green Version]
- Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic Differentiation in PyTorch. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
- MacQueen, J. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA, 27 December 1965–7 January 1966; Volume 1, pp. 281–297. [Google Scholar]
- Bezdek, J.C.; Ehrlich, R.; Full, W. FCM: The fuzzy c-means clustering algorithm. Comput. Geosci. 1984, 10, 191–203. [Google Scholar] [CrossRef]
- Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. Kdd 1996, 96, 226–231. [Google Scholar]
- Ankerst, M.; Breunig, M.M.; Kriegel, H.P.; Sander, J. OPTICS: Ordering points to identify the clustering structure. ACM Sigmod Rec. 1999, 28, 49–60. [Google Scholar] [CrossRef]
- Ng, A.Y.; Jordan, M.I.; Weiss, Y. On Spectral Clustering: Analysis and an algorithm. Adv. Neural Inf. Process. Syst. 2002, 2, 849–856. [Google Scholar]
- Unreal Engine. Available online: https://www.unrealengine.com/en-US/ (accessed on 13 January 2021).
- FARO Laser Scanner Focus3D X 130 HDR. The Imaging Laser Scanner. Available online: https://faro.app.box.com/s/lz4et2dd6zxk2dwtijmxgvu7yi3m9tve/file/441635448354 (accessed on 25 December 2020).
- Nikon D5500 Technical Specifications. Available online: https://www.nikon.co.uk/en_GB/product/discontinued/digital-cameras/2018/d5500-black#tech_specs (accessed on 25 December 2020).
- Sony Alpha 7R II Technical Specifications. Available online: https://www.sony.com/electronics/interchangeable-lens-cameras/ilce-7rm2/specifications (accessed on 25 December 2020).
- RealityCapture. Available online: https://www.capturingreality.com/ (accessed on 15 January 2021).
- CloudCompare—User Manual. Available online: http://www.cloudcompare.org/doc/qCC/CloudCompare%20v2.6.1%20-%20User%20manual.pdf (accessed on 13 January 2021).
- Blender—User Manual. Available online: https://docs.blender.org/manual/en/dev/ (accessed on 15 January 2021).
- Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
Table: comparison of greenfield planning and brownfield planning (table body not recoverable).
Table: comparison of stationary laser scanning and mobile laser scanning (table body not recoverable).
Data Source | Accuracy | Completeness | Point Density
---|---|---|---
4 Scans | ±5.3 mm | 41% | 89.4
6 Scans | ±4.8 mm | 51% | 146.7
Photogrammetry (wide-angle only) | ±5.8 mm | 61% | 854
Photogrammetry (fish-eye only) | ±7.8 mm | 60% | 154.2
4 Scans and Photogrammetry | ±6.3 mm | 61% | 804.3
6 Scans and Photogrammetry | ±6.2 mm | 64% | 789.9
Model | Training Acc. | Test Acc. | Test mIoU
---|---|---|---
Original PointNet [15] | 96.56% | 93.37% | 80.03
Classical PointNet | 97.66% | 94.23% | 78.08
Bayesian PointNet | 98.65% | 95.47% | 82.63
 | Baseline | Predictive | Aleatoric | Epistemic | Variance | Credible Int.
---|---|---|---|---|---|---
Assembly T. 1 | 94.21% | 96.63% | 96.64% | 95.54% | 95.60% | 94.99%
Assembly T. 2 | 94.78% | 97.41% | 97.47% | 96.22% | 96.22% | 95.63%
 | Baseline | Predictive | Aleatoric | Epistemic | Variance | Credible Int.
---|---|---|---|---|---|---
Assembly T. 1 | - | 6.55% | 6.56% | 4.09% | 3.70% | 1.47%
Assembly T. 2 | - | 7.08% | 7.17% | 4.52% | 4.09% | 1.84%
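As the caption of Figure 8 notes, points are flagged as uncertain when their predictive uncertainty exceeds the mean plus a multiple of the standard deviation. The masking step can be sketched in a few lines of Python; the function names are ours, and the per-point class probabilities are assumed to come from the Bayesian segmentation network:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a predictive class distribution (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def uncertain_mask(per_point_probs, k=3.0):
    """Flag points whose predictive entropy exceeds mean + k * sigma.

    per_point_probs: one class-probability vector per point, e.g. the
    Monte Carlo average predicted by a Bayesian segmentation network.
    """
    ent = [predictive_entropy(p) for p in per_point_probs]
    n = len(ent)
    mean = sum(ent) / n
    sigma = math.sqrt(sum((e - mean) ** 2 for e in ent) / n)
    threshold = mean + k * sigma
    return [e > threshold for e in ent]

# Toy example: two confident points and one ambiguous point.
probs = [[0.98, 0.01, 0.01], [0.97, 0.02, 0.01], [0.34, 0.33, 0.33]]
print(uncertain_mask(probs, k=1.0))  # → [False, False, True]
```

Lowering k from three to one (as in Figure 8b) widens the mask, which matches the larger uncertain-point percentages reported in the table above.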
Object | # Points | Method | Mistakes | Uncertain | Time
---|---|---|---|---|---
Car | 30,555 | k-means | 0.14% | - | 0.11 s
Car | 30,555 | c-means | 0.14% | 0.24% | 0.07 s
Car | 30,555 | DBSCAN | 0% | 1.51% | 3.81 s
Car | 30,555 | OPTICS | 0% | 1.51% | 45.64 s
Car | 30,555 | Spectral | 0% | - | 299.23 s
Hanger | 17,454 | k-means | 1.26% | - | 0.07 s
Hanger | 17,454 | c-means | 1.27% | 1.06% | 0.04 s
Hanger | 17,454 | DBSCAN | 5.17% | 0.91% | 1.74 s
Hanger | 17,454 | OPTICS | 0.91% | 5.18% | 15.78 s
Hanger | 17,454 | Spectral | 0.99% | - | 62.21 s
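The comparison above includes k-means, which is the fastest of the evaluated clustering methods. A minimal, dependency-free sketch of the assignment/update loop on toy 3D points follows; this is illustrative only and not the paper's implementation:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means for 3D points.

    The paper additionally compares c-means, DBSCAN, OPTICS and
    spectral clustering; this sketch covers only the k-means case.
    """
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        # (squared Euclidean distance over the three coordinates).
        labels = [
            min(range(k),
                key=lambda c: sum((p[d] - centroids[c][d]) ** 2 for d in range(3)))
            for p in points
        ]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(
                    sum(p[d] for p in members) / len(members) for d in range(3)
                )
    return labels

# Two well-separated toy "objects" along the x-axis.
cloud = [(0.1 * i, 0.0, 0.0) for i in range(5)] \
      + [(10 + 0.1 * i, 0.0, 0.0) for i in range(5)]
labels = kmeans(cloud, k=2)
# The first five points share one label, the last five the other.
```

On well-separated instances such as a car body and its hanger, this converges in a handful of iterations, which is consistent with the sub-second runtimes in the table.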
Object | Net | x-coord | y-coord | z-coord | Roll | Pitch | Yaw
---|---|---|---|---|---|---|---
Car 1 | F | 5.48 mm | 1.13 mm | 0.45 mm | 0.19° | 0.00° | 0.06°
Car 2 | F | 9.09 mm | 16.55 mm | 2.74 mm | 0.49° | 0.24° | 0.31°
Hanger 1 | F | 3.55 mm | 0.82 mm | 6.92 mm | 0.02° | 0.15° | 0.07°
Hanger 2 | F | 18.03 mm | 1.41 mm | 3.92 mm | 0.26° | 0.25° | 0.01°
Car 1 | B | 0.61 mm | 0.34 mm | 1.17 mm | 0.13° | 0.02° | 0.03°
Car 2 | B | 3.49 mm | 7.69 mm | 1.39 mm | 0.17° | 0.28° | 0.22°
Hanger 1 | B | 7.43 mm | 6.51 mm | 3.53 mm | 0.17° | 0.28° | 0.03°
Hanger 2 | B | 24.58 mm | 13.17 mm | 7.01 mm | 0.40° | 0.50° | 0.16°
Car 1 | B+U | 1.83 mm | 6.22 mm | 0.49 mm | 0.21° | 0.07° | 0.13°
Car 2 | B+U | 6.8 mm | 58.89 mm | 36.59 mm | 0.06° | 0.52° | 0.69°
Hanger 1 | B+U | 2.04 mm | 0.97 mm | 2.82 mm | 0.04° | 0.06° | 0.04°
Hanger 2 | B+U | 68.15 mm | 15.92 mm | 70.94 mm | 0.42° | 1.36° | 0.38°
Attribute | Value | Unit
---|---|---
Costs per m² | 1.5 | €
Average area of a plant | 950,000 | m²
Percentage of scanned area | 60 | %
Number of plants | 10 | #
Number of scans per year | 1 | #
Total cost per year | 8,550,000 | €/year
Degree of automation | 70 | %
Savings per year | 5,985,000 | €/year
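The yearly totals in the table follow directly from multiplying the tabulated values; a short Python check, using only those figures:

```python
# Economic potential of automated factory digitalization,
# computed from the values in the table above.
cost_per_m2 = 1.5          # EUR per square metre scanned
area_per_plant = 950_000   # average plant area in m^2
scanned_fraction = 0.60    # 60% of the area is scanned
num_plants = 10
scans_per_year = 1

total_cost = (cost_per_m2 * area_per_plant * scanned_fraction
              * num_plants * scans_per_year)
print(f"Total cost per year: {total_cost:,.0f} EUR")  # 8,550,000 EUR

automation_degree = 0.70   # 70% of the manual effort is automated
savings = total_cost * automation_degree
print(f"Savings per year: {savings:,.0f} EUR")        # 5,985,000 EUR
```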
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Petschnigg, C.; Spitzner, M.; Weitzendorf, L.; Pilz, J. From a Point Cloud to a Simulation Model—Bayesian Segmentation and Entropy Based Uncertainty Estimation for 3D Modelling. Entropy 2021, 23, 301. https://doi.org/10.3390/e23030301