Semantic Interpretation of Mobile Laser Scanner Point Clouds in Indoor Scenes Using Trajectories
Figure 1. From left to right: our prototype backpack system (ITC backpack), NavVis Trolley, Zeb-1 and Zeb-Revo.
Figure 2. Trajectories of the various mobile laser scanners, colored by time. From left to right: ITC backpack, NavVis Trolley, Zeb-1 and Zeb-Revo.
Figure 3. (a) Perspective view and (b) top view of the reflection situation. (c) The purple line is the incident ray from the sensor to the glass and on to the reflected point on the other side of the glass surface; the brown line shows the specularly reflected ray from the glass surface to the true position of the object. (d) The corrected situation after back-projection of the ghost wall; the white points are the corrected wall.
Figure 4. (a) In complex buildings, part of one building level can extend vertically into other levels, so a height-histogram approach does not work for separating the levels of such buildings. (b) Segmentation of the trajectory into horizontal and sloped segments. (c) The segmented trajectory after correction; for example, the purple and orange segments on the first floor are merged into one segment. (d) Separation of the first (blue) and third (red) levels using the trajectory; the intermediate floor is removed for better visualization. (e) The stairs are extracted using the trajectory on the stairs; each color corresponds to one segment of the stair trajectory.
Figure 5. (a) The segments of surface patches; (b) permanent structures, with walls in green, the ceiling in red and the floor in orange. The solid black circle marks the top of a bookshelf that is mislabeled as ceiling; consequently, the bookshelf (yellow rectangle) is mislabeled as wall. Likewise, some horizontal segments near the floor are mislabeled (dashed circles). (c) The result after checking the intersection of the vertical projections of each pair of surfaces and correcting the labels: wall (green), ceiling (red) and floor (orange); the blue object is clutter. The angle threshold is α = 50 degrees. Note that the dormer and the attached walls are labeled correctly by our method. The data are from Mura et al. [10].
Figure 6. (a) The permanent structures: ceiling (red), walls (cyan), slanted walls (blue) and floor (green); the angle threshold is 50 degrees. (b) The permanent structures with the same angle threshold (α = 50) but with the slanted-wall detection switched off; consequently, supporting walls are not detected (dashed circle) and only walls connected to the ceiling (cyan) are detected correctly. (c) With the angle threshold set to 40 degrees, the slanted walls are labeled as ceiling.
Figure 7. An incident voxel on the wall surface is assigned the label occupied, occluded or open if the measured point p1 lies in front of, on, or behind the wall surface, respectively.
Figure 8. (a) Classification of walls (orange), openings (light blue) and clutter (blue) in the fire-truck hall of the Fire Brigade building. The misclassified walls (red dotted area) cause the occlusion-test algorithm to add excess glass walls (light blue in (b)) in the middle of the space, which unnecessarily divides the space into several partitions. (c) The correct classification of walls after identifying and removing false openings.
Figure 9. (a,b) Top view of the partitions in various colors and the trajectory in black. The white areas between the spaces are occupied places (e.g., furniture and walls). The dotted circles mark invalid partitions that are removed because they do not intersect the trajectory; the large orange partition is also an invalid space but is not removed, because it is connected to the interior space and intersects the trajectory. (d) Perspective view of the spaces and the trajectory. (c,e) Bottom view of the spaces; the carvings of furniture and occupied places are visible inside the partitions.
Figure 10. A Zeb-Revo trajectory (blue) crosses an open door on the left and a semi-open door on the right. The middle door, which is closed, is not traversed by the trajectory and therefore cannot be detected by our algorithm. The yellow boxes show the door-center candidates and the voxels at the top of the doors; the circles show the search radius from each door-center candidate to the trajectory.
Figure 11. Results for the datasets of Table 1. From top to bottom: Fire Brigade building level 2, level 1, TU Braunschweig, Cadastre building and TU Delft Architecture building. The second column shows the detected walls (orange), floor (yellow), doors (red) and openings (blue).
Figure 12. First level of the Fire Brigade building. The amount of clutter and the very large glass walls make wall detection challenging; the ceiling has two different heights and there is a lot of clutter below it. The colors represent the segments.
Figure 13. Result of wall detection using the adjacency of segments. (a) The effect of the minimum-length parameter for intersection lines between adjacent segments (door leaves and the ceiling) on the wall-detection result. In (b) the minimum length is 25 cm, so small intersections are discarded and door leaves are consequently not misclassified as walls.
Figure 14. Robustness of our algorithms in buildings with many glass surfaces. (a) The orange hall in the TU Delft Architecture building, (b) the point cloud and (c) the classified walls and glass surfaces.
Figure 15. (a) Colored point cloud and (b) segmented point cloud. Our adjacency-graph algorithm is limited when clutter near the ceiling is connected to the walls and forms a vertical surface attached to the ceiling and neighboring walls (red area in the images). In (c), walls are yellow and the red area is misclassified as wall. The dataset is from Mura et al. [10].
Figure 16. Door detection in an area with a low ceiling. (a) Detected walls (grey), false walls (red), missed walls (green) and detected doors (blue); most of the doors crossed by the trajectory are detected. (b) Side view showing the trajectory and the low ceiling (light blue); the purple dots are points above the trajectory that are wrongly detected as a door. (c) Top view of (b).
Figure 17. The Cadastre building. (a) Top view and (b) side view of the point cloud of one of the floors. The glass façade has slanted surfaces and artefacts that make it difficult to detect by surface growing; the supporting walls connected to the floor are not detected by our algorithm. (c) Front view of the Cadastre building, with the slanted glass surfaces visible in the façade.
Abstract
1. Introduction
2. Related Work
3. Data Collection and Preprocessing
3.1. Point Clouds and the Trajectory
3.2. Identifying the Artefacts from Reflective Surfaces
- Objects behind the glass, measured when the laser beam is transmitted. Since almost 92% of the light passes through the glass, many objects behind a glass surface are measured through it. However, these points are less reliable than directly measured objects.
- Objects in front of a glass surface that are reflected in the glass. In this case the glass acts like a mirror or specular surface, so in the point cloud a mirrored object appears at the same distance on the other side of the glass and with the same size as the real object. We call these virtual objects “ghost walls”. They are problematic because an entire room can be mirrored to the other side of the specular surface. This artefact occurs when the laser scanner moves at a specific angle towards the glass surface, namely the same angle at which objects can be seen reflected in the glass (a minimal back-projection sketch follows this list).
- Points that represent the glass surface itself. If the laser beam hits the glass almost perpendicularly, or if there is dust or other features on the glass, part of the glass surface will be present in the point cloud.
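Correcting the second case (ghost walls) amounts to mirroring the reflected points back across the glass plane. The following is a minimal sketch of that geometric step only, assuming the glass surface has already been detected and fitted as a plane; the function name reflect_across_plane and the example coordinates are illustrative and are not taken from the paper's implementation.

```python
import numpy as np

def reflect_across_plane(points, plane_point, plane_normal):
    """Mirror 3D points across the plane defined by a point on it and its normal.

    Used here to back-project "ghost wall" points (specular reflections in glass)
    to their true position on the sensor side of the glass surface.
    """
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)                 # ensure a unit normal
    d = (points - plane_point) @ n            # signed distance of each point to the plane
    return points - 2.0 * d[:, None] * n      # subtract twice the normal component

# Illustrative use: ghost points behind a vertical glass pane at x = 5 m are
# mirrored back to the sensor side of the glass.
ghost_points = np.array([[6.2, 1.0, 1.5],
                         [7.0, 2.3, 0.8]])
corrected = reflect_across_plane(ghost_points,
                                 plane_point=np.array([5.0, 0.0, 0.0]),
                                 plane_normal=np.array([1.0, 0.0, 0.0]))
print(corrected)   # x-coordinates become 3.8 and 3.0, i.e., in front of the glass
```

The mirroring formula p' = p - 2((p - q)·n) n, with q a point on the glass plane and n its unit normal, is applied to every ghost point; identifying which points are ghost points in the first place relies on the sensor position and incidence geometry described above.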
3.3. Segmentation and Generalization
4. Permanent Structure Detection
4.1. Separation of Building Levels and Stairs
4.2. Wall Detection
- E obtains the label wall-wall iff v1 and v2 are both almost-vertical and adjacent.
- E obtains the label wall-ceiling iff v1 and v2 are almost-vertical and almost-horizontal, respectively, and the center of v2 is higher than the center of v1.
- E obtains the label wall-floor iff v1 and v2 are almost-vertical and almost-horizontal, respectively, and the center of v2 is lower than the center of v1.
- Rule 1.
- V obtains the label wall iff the count of wall-ceiling edges is at least one and V is almost-vertical. This means every wall must be connected to the ceiling at least once.
- Rule 2.
- V obtains the label ceiling iff the count of wall-ceiling edges is more than two and the count of wall-wall edges is zero. This means an almost-horizontal surface with wall-ceiling edges must be connected to walls more than twice to obtain the ceiling label.
- Rule 3.
- V obtains the label floor iff the count of wall-floor edges is more than two and the count of wall-wall edges is zero. This means an almost-horizontal surface with wall-floor edges must be connected to walls more than twice to obtain the floor label (a code sketch of these edge and node rules follows this list).
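The sketch below is a minimal, illustrative implementation of these edge and node rules on a segment adjacency graph, assuming the surface patches have already been classified as almost-vertical or almost-horizontal and their adjacencies are known; the Segment class and the label_edges/label_nodes functions are our own names and not the paper's code.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    id: int
    almost_vertical: bool          # True: wall candidate, False: almost-horizontal
    center_z: float                # height of the segment's center
    neighbors: list = field(default_factory=list)  # ids of adjacent segments
    label: str = "clutter"

def label_edges(segments):
    """Label each adjacency edge E as wall-wall, wall-ceiling or wall-floor."""
    edges = {}
    for s in segments.values():
        for nid in s.neighbors:
            n = segments[nid]
            key = tuple(sorted((s.id, n.id)))
            if s.almost_vertical and n.almost_vertical:
                edges[key] = "wall-wall"
            elif s.almost_vertical != n.almost_vertical:
                vert, horiz = (s, n) if s.almost_vertical else (n, s)
                edges[key] = ("wall-ceiling" if horiz.center_z > vert.center_z
                              else "wall-floor")
    return edges

def label_nodes(segments, edges):
    """Apply Rules 1-3 to assign wall / ceiling / floor labels to the nodes V."""
    for s in segments.values():
        counts = {"wall-wall": 0, "wall-ceiling": 0, "wall-floor": 0}
        for nid in s.neighbors:
            kind = edges.get(tuple(sorted((s.id, nid))))
            if kind:
                counts[kind] += 1
        if s.almost_vertical and counts["wall-ceiling"] >= 1:
            s.label = "wall"                         # Rule 1
        elif not s.almost_vertical and counts["wall-wall"] == 0:
            if counts["wall-ceiling"] > 2:
                s.label = "ceiling"                  # Rule 2
            elif counts["wall-floor"] > 2:
                s.label = "floor"                    # Rule 3

# Toy example: three vertical segments between a floor (z = 0) and a ceiling (z = 3).
segs = {
    0: Segment(0, False, 0.0, [1, 2, 3]),   # floor candidate
    1: Segment(1, True, 1.5, [0, 4]),       # wall candidates
    2: Segment(2, True, 1.5, [0, 4]),
    3: Segment(3, True, 1.5, [0, 4]),
    4: Segment(4, False, 3.0, [1, 2, 3]),   # ceiling candidate
}
edge_labels = label_edges(segs)
label_nodes(segs, edge_labels)
print({s.id: s.label for s in segs.values()})
# -> {0: 'floor', 1: 'wall', 2: 'wall', 3: 'wall', 4: 'ceiling'}
```

In the paper, adjacency follows from intersection lines between segments (subject to a minimum-length threshold, see Figure 13); here it is supplied directly to keep the sketch self-contained.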
4.3. Opening Detection Using the MLS Trajectory
5. Space Partitioning
5.1. Volumetric Space Partitioning
5.2. Extracting the Navigable Space
6. Door Detection Using the Trajectory
7. Results and Evaluation
8. Conclusions and Future Work
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Mozos, O.M.; Stachniss, C.; Burgard, W. Supervised Learning of Places from Range Data using AdaBoost. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 1730–1735. [Google Scholar]
- Mozos, O.M. Semantic Labeling of Places with Mobile Robots; Springer: Berlin/Heidelberg, Germany, 2010; Volume 61. [Google Scholar]
- Ikehata, S.; Yang, H.; Furukawa, Y. Structured indoor modeling. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1323–1331. [Google Scholar]
- Budroni, A.; Boehm, J. Automated 3D Reconstruction of Interiors from Point Clouds. Int. J. Archit. Comput. 2010, 8, 55–73. [Google Scholar] [CrossRef]
- Becker, S.; Peter, M.; Fritsch, D. Grammar-Supported 3d Indoor Reconstruction from Point Clouds For “As-Built” Bim. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2015, 1, 17–24. [Google Scholar] [CrossRef]
- Turner, E.; Zakhor, A. Floor plan generation and room labeling of indoor environments from laser range data. In Proceedings of the International Conference on Computer Graphics Theory and Applications, Lisbon, Portugal, 5–8 January 2014. [Google Scholar]
- Ochmann, S.; Vock, R.; Wessel, R.; Klein, R. Automatic reconstruction of parametric building models from indoor point clouds. Comput. Graph. 2016, 54, 94–103. [Google Scholar] [CrossRef]
- Mura, C.; Mattausch, O.; Jaspe Villanueva, A.; Gobbetti, E.; Pajarola, R. Automatic room detection and reconstruction in cluttered indoor environments with complex room layouts. Comput. Graph. 2014, 44, 20–32. [Google Scholar] [CrossRef] [Green Version]
- Bormann, R.; Jordan, F.; Li, W.; Hampp, J.; Hägele, M. Room segmentation: Survey, implementation, and analysis. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 1019–1026. [Google Scholar]
- Mura, C.; Mattausch, O.; Pajarola, R. Piecewise-planar Reconstruction of Multi-room Interiors with Arbitrary Wall Arrangements. Comput. Graph. Forum 2016, 35, 179–188. [Google Scholar] [CrossRef]
- Elseicy, A. Semantic Enrichment of Indoor Mobile Laser Scanner Point Clouds and Trajectories. 2018, pp. 31–48. Available online: https://library.itc.utwente.nl/papers_2018/msc/gfm/ElSeicy.pdf (accessed on 21 September 2018).
- Zheng, Y.; Peter, M.; Zhong, R.; Oude Elberink, S.; Zhou, Q. Space Subdivision in Indoor Mobile Laser Scanning Point Clouds Based on Scanline Analysis. Sensors 2018, 18, 1838. [Google Scholar] [CrossRef] [PubMed]
- Foster, P.; Sun, Z.; Park, J.J.; Kuipers, B. VisAGGE: Visible angle grid for glass environments. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 2213–2220. [Google Scholar]
- Koch, R.; May, S.; Murmann, P.; Nüchter, A. Identification of transparent and specular reflective material in laser scans to discriminate affected measurements for faultless robotic SLAM. Robot. Auton. Syst. 2017, 87, 296–312. [Google Scholar] [CrossRef]
- RIEGL-Terrestrial Scanning. Available online: http://www.riegl.com/nc/products/terrestrial-scanning/ (accessed on 11 January 2018).
- FARO Focus|FARO. Available online: https://www.faro.com/products/construction-bim-cim/faro-focus/ (accessed on 11 January 2018).
- Cartographer ROS Integration—Cartographer ROS 1.0.0 Documentation. Available online: http://google-cartographer-ros.readthedocs.io/en/latest/ (accessed on 10 January 2018).
- Leica Pegasus: Backpack Wearable Mobile Mapping Solution. Available online: https://leica-geosystems.com/en/products/mobile-sensor-platforms/capture-platforms/leica-pegasus-backpack (accessed on 10 January 2018).
- NavVis|M3 Trolley. Available online: http://www.navvis.com/m3trolley (accessed on 10 January 2018).
- VIAMETRIS-Mobile Mapping Technology. Available online: http://www.viametris.com/ (accessed on 10 January 2018).
- GeoSLAM—The Experts in “Go-Anywhere” 3D Mobile Mapping Technology. Available online: https://geoslam.com/ (accessed on 10 January 2018).
- Matterport 3D Models of Real Interior Spaces. Available online: http://matterport.com (accessed on 4 January 2018).
- Tango. Available online: https://developers.google.com/tango/ (accessed on 11 January 2018).
- Lehtola, V.V.; Kaartinen, H.; Nüchter, A.; Kaijaluoto, R.; Kukko, A.; Litkey, P.; Honkavaara, E.; Rosnell, T.; Vaaja, M.T.; Virtanen, J.-P. Comparison of the Selected State-Of-The-Art 3D Indoor Scanning and Point Cloud Generation Methods. Remote Sens. 2017, 9, 796. [Google Scholar] [CrossRef]
- Vosselman, G. Design of an indoor mapping system using three 2D laser scanners and 6 DOF SLAM. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 2, 173. [Google Scholar] [CrossRef]
- Xiao, J.; Furukawa, Y. Reconstructing the world’s museums. Int. J. Comput. Vis. 2014, 110, 243–258. [Google Scholar] [CrossRef]
- Stiny, G. Spatial Relations and Grammars. Environ. Plan. B Plan. Des. 1982, 9, 113–114. [Google Scholar] [CrossRef]
- Gips, J.; Stiny, G. Production Systems and Grammars: A Uniform Characterization. Environ. Plan. B Plan. Des. 1980, 7, 399–408. [Google Scholar] [CrossRef]
- Tran, H.; Khoshelham, K.; Kealy, A.; Díaz-Vilariño, L. Extracting Topological Relations between Indoor Spaces from Point Clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 4, 401. [Google Scholar] [CrossRef]
- Peter, M. Modelling of Indoor Environments Using Lindenmayer Systems. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017. [Google Scholar] [CrossRef]
- Wonka, P.; Wimmer, M.; Sillion, F.; Ribarsky, W. Instant Architecture. In ACM SIGGRAPH 2003 Papers, Proceedings of the ACM Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH ’03), San Diego, CA, USA, 27–31 July 2003; ACM: New York, NY, USA, 2003; pp. 669–677. [Google Scholar]
- Müller, P.; Wonka, P.; Haegler, S.; Ulmer, A.; Van Gool, L. Procedural Modeling of Buildings. In ACM SIGGRAPH 2006 Papers, Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference (SIGGRAPH ’06), Boston, MA, USA, 30 July–3 August 2006; ACM: New York, NY, USA, 2006; pp. 614–623. [Google Scholar]
- Bokeloh, M.; Wand, M.; Seidel, H.-P. A Connection between Partial Symmetry and Inverse Procedural Modeling. In ACM SIGGRAPH 2010 Papers, Proceedings of the Special Interest Group on Computer Graphics and Interactive Techniques Conference (SIGGRAPH ’10), Los Angeles, CA, USA, 26–30 July 2010; ACM: New York, NY, USA, 2010; p. 104. [Google Scholar]
- Martinovic, A.; Van Gool, L. Bayesian Grammar Learning for Inverse Procedural Modeling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 201–208. [Google Scholar]
- Khoshelham, K.; Diaz-Vilarino, L. 3D Modeling of Interior Spaces: Learning the Language of Indoor Architecture. In Proceedings of the ISPRS Technical Commission V Symposium, Riva del Garda, Italy, 23–25 June 2014; Volume 2325. [Google Scholar]
- Gröger, G.; Plümer, L. Derivation of 3D Indoor Models by Grammars for Route Planning. Photogramm. Fernerkund. Geoinf. 2010, 2010, 191–206. [Google Scholar] [CrossRef]
- Oesau, S.; Lafarge, F.; Alliez, P. Indoor scene reconstruction using feature sensitive primitive extraction and graph-cut. ISPRS J. Photogramm. Remote Sens. 2014, 90, 68–82. [Google Scholar] [CrossRef] [Green Version]
- Previtali, M.; Barazzetti, L.; Brumana, R.; Scaioni, M. Towards automatic indoor reconstruction of cluttered building rooms from point clouds. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 1, 281–288. [Google Scholar] [CrossRef]
- Chauve, A.-L.; Labatut, P.; Pons, J.-P. Robust piecewise-planar 3D reconstruction and completion from large-scale unstructured point data. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 1261–1268. [Google Scholar]
- Boulch, A.; De La Gorce, M.; Marlet, R. Piecewise-Planar 3D Reconstruction with Edge and Corner Regularization. Comput. Graph. Forum 2014, 33, 55–64. [Google Scholar] [CrossRef] [Green Version]
- Adan, A.; Huber, D. 3D Reconstruction of Interior Wall Surfaces under Occlusion and Clutter. In Proceedings of the 2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), Hangzhou, China, 16–20 May 2011; pp. 275–281. [Google Scholar]
- Xiong, X.; Adan, A.; Akinci, B.; Huber, D. Automatic creation of semantically rich 3D building models from laser scanner data. Autom. Constr. 2013, 31, 325–337. [Google Scholar] [CrossRef] [Green Version]
- Rusu, R.B. Identifying and Opening Doors. In Semantic 3D Object Maps for Everyday Robot Manipulation; Springer Tracts in Advanced Robotics; Springer: Berlin/Heidelberg, Germany, 2013; pp. 161–175. ISBN 978-3-642-35478-6. [Google Scholar]
- Diaz-Vilarino, L.; Verbree, E.; Zlatanova, S.; Diakité, A. Indoor modelling from SLAM-based laser scanner: Door detection to envelope reconstruction. ISPRS-Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2017, 345–352. [Google Scholar] [CrossRef]
- Quintana, B.; Prieto, S.A.; Adán, A.; Bosché, F. Door detection in 3D coloured point clouds of indoor environments. Autom. Constr. 2018, 85, 146–166. [Google Scholar] [CrossRef]
- Diaz-Vilarino, L.; Khoshelham, K.; Martínez-Sánchez, J.; Arias, P. 3D Modeling of Building Indoor Spaces and Closed Doors from Imagery and Point Clouds. Sensors 2015, 15, 3491–3512. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Elseicy, A.; Nikoohemat, S.; Peter, M.; Oude Elberink, S. Space Subdivision of Indoor Mobile Laser Scanning Data Based on The Scanner Trajectory. Remote Sens. 2018, 18, 1838. [Google Scholar]
- Rusu, R.B. Table Cleaning in Dynamic Environments. In Semantic 3D Object Maps for Everyday Robot Manipulation; Springer Tracts in Advanced Robotics; Springer: Berlin/Heidelberg, Germany, 2013; pp. 149–159. ISBN 978-3-642-35478-6. [Google Scholar]
- Rusu, R.B.; Marton, Z.C.; Blodow, N.; Holzbach, A.; Beetz, M. Model-based and learned semantic object labeling in 3D point cloud maps of kitchen environments. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), St. Louis, MO, USA, 11–15 October 2009; pp. 3601–3608. [Google Scholar]
- Wolf, D.; Prankl, J.; Vincze, M. Fast semantic segmentation of 3D point clouds using a dense CRF with learned parameters. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 25–30 May 2015; pp. 4867–4873. [Google Scholar]
- Silberman, N.; Fergus, R. Indoor scene segmentation using a structured light sensor. In Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops), Barcelona, Spain, 6–13 November 2011; pp. 601–608. [Google Scholar]
- Armeni, I.; Sener, O.; Zamir, A.R.; Jiang, H.; Brilakis, I.; Fischer, M.; Savarese, S. 3D Semantic Parsing of Large-Scale Indoor Spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1534–1543. [Google Scholar]
- Mattausch, O.; Panozzo, D.; Mura, C.; Sorkine-Hornung, O.; Pajarola, R. Object detection and classification from large-scale cluttered indoor scans: Object detection and classification. Comput. Graph. Forum 2014, 33, 11–21. [Google Scholar] [CrossRef]
- Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. Proc. Comput. Vis. Pattern Recognit. (CVPR) 2017, 1, 4. [Google Scholar]
- Macher, H.; Landes, T.; Grussenmeyer, P. From Point Clouds to Building Information Models: 3D Semi-Automatic Reconstruction of Indoors of Existing Buildings. Appl. Sci. 2017, 7, 1030. [Google Scholar] [CrossRef]
- Barazzetti, L. Parametric as-built model generation of complex shapes from point clouds. Adv. Eng. Inform. 2016, 30, 298–311. [Google Scholar] [CrossRef]
- Jung, J.; Hong, S.; Jeong, S.; Kim, S.; Cho, H.; Hong, S.; Heo, J. Productive modeling for development of as-built BIM of existing indoor structures. Autom. Constr. 2014, 42, 68–77. [Google Scholar] [CrossRef]
- Loch-Dehbi, S.; Dehbi, Y.; Plümer, L. Estimation of 3D Indoor Models with Constraint Propagation and Stochastic Reasoning in the Absence of Indoor Measurements. ISPRS Int. J. Geo-Inf. 2017, 6, 90. [Google Scholar] [CrossRef]
- Bosse, M.; Zlot, R.; Flick, P. Zebedee: Design of a spring-mounted 3-d range sensor with application to mobile mapping. IEEE Trans. Robot. 2012, 28, 1104–1119. [Google Scholar] [CrossRef]
- Scanning Rangefinder Distance Data Output/UTM-30LX Product Details|HOKUYO AUTOMATIC CO., LTD. Available online: https://www.hokuyo-aut.jp/search/single.php?serial=169 (accessed on 25 April 2018).
- Vosselman, G.; Gorte, B.G.; Sithole, G.; Rabbani, T. Recognising structure in laser scanner point clouds. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2004, 46, 33–38. [Google Scholar]
- Kada, M. Generalisation of 3D building models by cell decomposition and primitive instancing. In Proceedings of the Joint ISPRS Workshop on “Visualization and Exploration of Geospatial Data”, Stuttgart, Germany, 27–29 June 2007. [Google Scholar]
- Turner, E.; Cheng, P.; Zakhor, A. Fast, Automated, Scalable Generation of Textured 3D Models of Indoor Environments. IEEE J. Sel. Top. Signal Process. 2015, 9, 409–421. [Google Scholar] [CrossRef]
| Dataset | # Points | MLS Device | # Rooms/# Detected | # Doors/# Detected | Clutter and Glass |
| --- | --- | --- | --- | --- | --- |
| Fire Brigade level 1 | 2.9 M | Zeb-1 | 9/8 | 8/7 | High |
| Fire Brigade level 2 | 3.6 M | Zeb-1 | 16/14 | 17/12 | High |
| Cadastre Building | 4.1 M | NavVis Trolley | 10/9 | 7/5 | High |
| TU Braunschweig | 1.7 M | ITC backpack | 30/27 | 30/29 | Low |
| TU Delft Architecture | 3.2 M | Zeb-Revo | 18/13 | 25/18 | High |
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).