
Remote Sensed Data and Processing Methodologies for 3D Virtual Reconstruction and Visualization of Complex Architectures

A special issue of Remote Sensing (ISSN 2072-4292).

Deadline for manuscript submissions: closed (30 November 2015) | Viewed by 187536

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Dr. Diego González-Aguilera
E-Mail Website
Guest Editor
Cartographic and Land Engineering Department, Higher Polytechnic School of Avila, University of Salamanca, Hornos Caleros, 50 05003 Avila, Spain
Interests: photogrammetry; laser scanning; 3D modeling; topography; cartography
Special Issues, Collections and Topics in MDPI journals

Dr. Fabio Remondino
E-Mail Website
Guest Editor
3D Optical Metrology (3DOM) Unit, Bruno Kessler Foundation (FBK), 38123 Trento, Italy
Interests: geomatics; mapping; UAV
Special Issues, Collections and Topics in MDPI journals

Special Issue Information

Dear Colleagues,

In recent years, the 3D reconstruction and modeling of complex architectures from remotely acquired, multi-source data has attracted growing interest. This “democratization” of 3D modeling processes and the wide availability of data are, however, not always matched by reliable, affordable and powerful tools for producing photo-realistic, metric, re-usable and semantic-aware 3D products. This should motivate research to design, develop and validate a novel easy-to-use, easy-to-learn and low-cost framework for 3D modeling and the further understanding of virtual environments using multiple data sources, so that the whole 3D modeling community has access to an affordable, transferable, functional and usable set of methods and tools. This challenge raises several problems that must be addressed: from improving and testing the technical capabilities of new capturing devices, to solving the problems that arise from large image blocks; from delivering Building Information Modeling (BIM) standards that enable new management approaches, to replacing existing visualization tools with new working experiences such as Virtual and Augmented Reality or game-engine technology.

This Special Issue stems from the ISPRS 3D-ARCH workshop series (http://3d-arch.org) and will accept improved and extended papers selected from the workshop proceedings, as well as new contributions from international colleagues.

We invite you to submit articles on the following topics:

  • Multi-source data and multi-sensors integration for advanced terrestrial 3D modeling
  • New low-cost sensors and open-source algorithms for terrestrial 3D modeling
  • Automation in data registration
  • Dense image matching and automated 3D reconstruction
  • Point cloud analysis and feature extraction
  • Novel methods for the generation of structured 3D data from unstructured point clouds
  • Procedural modeling and HBIM
  • Accuracy requirements and assessment in 3D reconstructions
  • Virtual and Augmented Reality applied to the visualization and conservation of complex architectures and heritage

Authors are required to check and follow the specific Instructions to Authors; see https://dl.dropboxusercontent.com/u/165068305/Remote_Sensing-Additional_Instructions.pdf

Dr. Diego Gonzalez-Aguilera
Dr. Fabio Remondino
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (19 papers)


Research

Jump to: Review, Other

5783 KiB  
Article
A Multi-View Dense Point Cloud Generation Algorithm Based on Low-Altitude Remote Sensing Images
by Zhenfeng Shao, Nan Yang, Xiongwu Xiao, Lei Zhang and Zhe Peng
Remote Sens. 2016, 8(5), 381; https://doi.org/10.3390/rs8050381 - 4 May 2016
Cited by 39 | Viewed by 8128
Abstract
This paper presents a novel multi-view dense point cloud generation algorithm based on low-altitude remote sensing images. The proposed method was designed to be especially effective in enhancing the density of point clouds generated by Multi-View Stereo (MVS) algorithms. To overcome the limitations of MVS and dense matching algorithms, an expanded patch was set up for each point in the point cloud. Then, a patch-based Multiphoto Geometrically Constrained Matching (MPGC) was employed to optimize points on the patch based on least square adjustment, the space geometry relationship, and epipolar line constraint. The major advantages of this approach are twofold: (1) compared with the MVS method, the proposed algorithm can achieve denser three-dimensional (3D) point cloud data; and (2) compared with the epipolar-based dense matching method, the proposed method utilizes redundant measurements to weaken the influence of occlusion and noise on matching results. Comparison studies and experimental results have validated the accuracy of the proposed algorithm in low-altitude remote sensing image dense point cloud generation. Full article
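To give a concrete picture of the patch-based photometric consistency idea summarized in the abstract above, the following minimal Python sketch projects a small planar patch into several views and scores it with normalized cross-correlation. It is not the authors' MPGC implementation; the pinhole camera model, window size and nearest-neighbour sampling are simplifying assumptions.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D points X (N,3) into pixel coordinates with a pinhole camera."""
    x = (K @ (R @ X.T + t[:, None])).T
    return x[:, :2] / x[:, 2:3]

def sample(img, uv):
    """Nearest-neighbour intensity lookup, clamped to the image borders."""
    h, w = img.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    return img[v, u]

def ncc(a, b):
    """Normalized cross-correlation between two intensity vectors."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / d) if d > 0 else 0.0

def patch_score(center, normal, half_size, step, images, cameras):
    """Mean photometric consistency of a square planar patch across all views.

    A local frame (u_dir, v_dir) spans the patch plane; a candidate expanded
    patch would be kept only if this score stays above a threshold.
    """
    normal = normal / np.linalg.norm(normal)
    u_dir = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u_dir) < 1e-6:
        u_dir = np.array([1.0, 0.0, 0.0])
    u_dir = u_dir / np.linalg.norm(u_dir)
    v_dir = np.cross(normal, u_dir)
    g = np.arange(-half_size, half_size + 1) * step
    uu, vv = np.meshgrid(g, g)
    pts = center + uu.reshape(-1, 1) * u_dir + vv.reshape(-1, 1) * v_dir
    ref = sample(images[0], project(*cameras[0], pts))
    scores = [ncc(ref, sample(img, project(*cam, pts)))
              for img, cam in zip(images[1:], cameras[1:])]
    return float(np.mean(scores))
```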
Show Figures

Figure 1. Diagrammatic sketch of the multi-view dense point cloud generation algorithm: (a) the seed patch generated from PMVS; (b) the patch expanded from the PMVS patch; (c) the optimized patch to improve accuracy.
Figure 2. (a) Projection relationship between pixels (grids) in the image window and object points (dots) in the patch; (b) process of the patch-based point cloud expansion algorithm.
Figure 3. Process of the point cloud optimization algorithm.
Figure 4. Sample input images of the data sets used in the experiments: (a) Northwestern University; (b) Yangjiang; (c) Hainan.
Figure 5. Reconstructed point clouds of the Northwestern University images, shown in MeshLab: point clouds and detail views generated by PMVS, SURE, Pix4D, and the proposed method.
Figure 6. Reconstructed point clouds of the Yangjiang images, shown in MeshLab: point clouds and detail views generated by PMVS, SURE, Pix4D, and the proposed method.
Figure 7. Reconstructed point clouds of the Hainan images, shown in MeshLab: point clouds and detail views generated by PMVS, SURE, Pix4D, and the proposed method.
20017 KiB  
Article
Fine Surveying and 3D Modeling Approach for Wooden Ancient Architecture via Multiple Laser Scanner Integration
by Qingwu Hu, Shaohua Wang, Caiwu Fu, Mingyao Ai, Dengbo Yu and Wende Wang
Remote Sens. 2016, 8(4), 270; https://doi.org/10.3390/rs8040270 - 25 Mar 2016
Cited by 42 | Viewed by 10595
Abstract
A multiple terrestrial laser scanner (TLS) integration approach is proposed for the fine surveying and 3D modeling of ancient wooden architecture in an ancient building complex of the Wudang Mountains, which is located in very steep surroundings that make it difficult to access. Three-level TLS with a scalable measurement distance and accuracy is presented for data collection to compensate for data missed because of mutual sheltering and scanning view limitations. A multi-scale data fusion approach is proposed for data registration and filtering of the different scales and separated 3D data. A point projection algorithm together with point cloud slicing tools is designed for fine surveying to generate all types of architecture maps, such as plan drawings, facade drawings, section drawings, and door and window drawings. The section drawings together with the slicing point cloud are presented for the deformation analysis of the building structure. Along with fine drawings and laser scanning data, the 3D models of the ancient architecture components are built for digital management and visualization. Results show that the proposed approach can achieve fine surveying and 3D documentation of the ancient architecture within 3 mm accuracy. In addition, the defects of scanning view and mutual sheltering can be overcome to obtain the complete and exact structure in detail. Full article
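As a rough illustration of the point cloud slicing step used above for section and plan drawings (not the authors' tool), a minimal Python sketch that extracts a horizontal slice of a registered point cloud; the slice height and tolerance are illustrative values.

```python
import numpy as np

def horizontal_slice(points, z0, tolerance=0.005):
    """Return the XY projection of all points lying within z0 +/- tolerance.

    points: (N, 3) array of registered TLS coordinates in metres. The 2D
    result can be drawn directly as the basis of a plan or section drawing.
    """
    mask = np.abs(points[:, 2] - z0) <= tolerance
    return points[mask][:, :2]

# Example with synthetic data: 100k random points, sliced 1.5 m above the floor.
if __name__ == "__main__":
    cloud = np.random.rand(100_000, 3) * [10.0, 10.0, 6.0]
    section = horizontal_slice(cloud, z0=1.5)
    print(section.shape)
```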
Show Figures

Figure 1. Ancient architecture of Liangyi Temple: (a) front side; (b) side face. The red dashed rectangle in (a) is the Liangyi Temple; the red dashed rectangle in (b) is the gate door of Liangyi Temple; the red circle is the Longtou Incense.
Figure 2. Blind corner in scanning from two stations.
Figure 3. Proposed idea for the fine mapping and 3D documentation of Liangyi Temple.
Figure 4. Workflow of field data collection.
Figure 5. Long-distance laser scanning of the cliff terrain: (a) scanning stations located throughout the whole building; (b) scanning work scene on the platform; (c) 3D point cloud data with color information.
Figure 6. 3D data collection for the building: (a) gate; (b) corridor; (c) Longtou Incense; (d) roof; (e) statues; (f) point cloud of the whole building; (g) attic.
Figure 7. 3D data collection for some components with the handheld scanner: (a) small statue; (b) rows of horned beasts; (c) large statue; (d) corner horned beast.
Figure 8. 3D data registration approach for the point clouds of the different scanners.
Figure 9. Noise filtering and data repair of the component point cloud: (a) original point cloud; (b) hole in detail; (c) result of noise filtering; (d) result after filtering and repairing; (e) result of data repair.
Figure 10. Point cloud projection for the plane drawing of the building roof: (a) point cloud projection; (b) plane drawing based on the point cloud projection.
Figure 11. Point cloud slicing and projection.
Figure 12. Fine drawings of typical components: (a) ridge tie beam; (b) crescent beam; (c) hip rafter; (d) architrave.
Figure 13. Reality-based 3D modeling of historical relics: (a) 3D model of a horned beast; (b) 3D model of a statue; (c) 3D model of a sculpture; (d) texture-mapped 3D model of the Longtou Incense; (e) texture-mapped 3D model of a horned beast.
Figure 14. Reconstructive 3D modeling of the architecture: (a) plan drawing of the building; (b) section drawing; (c) facade drawing; (d) reconstructive 3D model of the building; (e) architectural component modeling; (f) reconstructive 3D model of the component.
Figure 15. Fine drawings of Liangyi Temple: (a) plane drawings (layout); (b) facade drawings; (c) section drawings; (d) window drawings and bracket set drawings.
Figure 16. Reconstructive 3D model of Liangyi Temple: (a) rendered with gray image; (b) rendered with texture mapping.
Figure 17. 3D scene of Liangyi Temple: (a) front panoramic view; (b) top view; (c) inner view of the corridor and hall.
Figure 18. Deformation analysis of the column base: (a) crooked column on the section drawing; (b) crooked direction of Liangyi Temple.
13474 KiB  
Article
Frescoed Vaults: Accuracy Controlled Simplified Methodology for Planar Development of Three-Dimensional Textured Models
by Marco Giorgio Bevilacqua, Gabriella Caroti, Isabel Martínez-Espejo Zaragoza and Andrea Piemonte
Remote Sens. 2016, 8(3), 239; https://doi.org/10.3390/rs8030239 - 14 Mar 2016
Cited by 17 | Viewed by 7405
Abstract
In the field of documentation and preservation of cultural heritage, there is keen interest in 3D metric viewing and rendering of architecture for both formal appearance and color. On the other hand, operative steps of restoration interventions still require full-scale, 2D metric surface representations. The transition from 3D to 2D representation, with the related geometric transformations, has not yet been fully formalized for planar development of frescoed vaults. Methodologies proposed so far on this subject provide transitioning from point cloud models to ideal mathematical surfaces and projecting textures using software tools. The methodology used for geometry and texture development in the present work does not require any dedicated software. The different processing steps can be individually checked for any error introduced, which can be then quantified. A direct accuracy check of the planar development of the frescoed surface has been carried out by qualified restorers, yielding a result of 3 mm. The proposed methodology, although requiring further studies to improve automation of the different processing steps, allowed extracting 2D drafts fully usable by operators restoring the vault frescoes. Full article
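As a hedged sketch of the kind of cylindrical approximation and planar development described above (not the authors' accuracy-controlled workflow), the following Python snippet fits a circle to the vault cross-section and unrolls the points onto the development plane; assuming the vault axis is parallel to the Y axis.

```python
import numpy as np

def fit_circle_xz(points):
    """Least-squares circle fit in the XZ plane (Kasa method).

    Returns the centre (cx, cz) and radius R of the circle that best
    approximates the vault cross-section.
    """
    x, z = points[:, 0], points[:, 2]
    A = np.column_stack([2 * x, 2 * z, np.ones_like(x)])
    b = x ** 2 + z ** 2
    (cx, cz, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cz, np.sqrt(c + cx ** 2 + cz ** 2)

def develop_on_cylinder(points):
    """Map 3D vault points onto the (arc length, y) development plane."""
    cx, cz, R = fit_circle_xz(points)
    theta = np.arctan2(points[:, 0] - cx, points[:, 2] - cz)
    return np.column_stack([R * theta, points[:, 1]]), R
```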
Show Figures

Figure 1. (a) Model SfM/LASER; (b) Model SfM/MVS; (c) Model LASER.
Figure 2. Dense contour model (isometric view).
Figure 3. Discontinuity directions and vault areas (bottom view).
Figure 4. Approximation by cylindrical surface.
Figure 5. Actual 3D model—cylindrical surface error.
Figure 6. 20-cm step contour lines model.
Figure 7. Section arc—chord comparison.
Figure 8. Geometric development of the contours model.
Figure 9. Projection deformation.
Figure 10. (a) Orthogonal view with section lines; (b) orthogonal view with high quality texture.
Figure 11. Superimposition of the orthogonal view on the sections model development.
Figure 12. High quality textured model development.
Figure 13. Regions checked for deviations between the LASER cloud and the SfM/MVS model.
Figure 14. Region A: total plaster collapse borders.
Figure 15. Region B: total plaster collapse borders.
Figure 16. Region C: gap in the fresco.
Figure 17. Region D: crack in the topmost region of the vault.
Figure 18. CPs on the vault.
Figure 19. Development accuracy assessment at 1:1 scale.
20919 KiB  
Article
An Advanced Pre-Processing Pipeline to Improve Automated Photogrammetric Reconstructions of Architectural Scenes
by Marco Gaiani, Fabio Remondino, Fabrizio I. Apollonio and Andrea Ballabeni
Remote Sens. 2016, 8(3), 178; https://doi.org/10.3390/rs8030178 - 25 Feb 2016
Cited by 61 | Viewed by 14628
Abstract
Automated image-based 3D reconstruction methods are more and more flooding our 3D modeling applications. Fully automated solutions give the impression that from a sample of randomly acquired images we can derive quite impressive visual 3D models. Although the level of automation is reaching very high standards, image quality is a fundamental pre-requisite to produce successful and photo-realistic 3D products, in particular when dealing with large datasets of images. This article presents an efficient pipeline based on color enhancement, image denoising, color-to-gray conversion and image content enrichment. The pipeline stems from an analysis of various state-of-the-art algorithms and aims to adjust the most promising methods, giving solutions to typical failure causes. The assessment evaluation proves how an effective image pre-processing, which considers the entire image dataset, can improve the automated orientation procedure and dense 3D point cloud reconstruction, even in the case of poor texture scenarios. Full article
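As an illustrative sketch of the image content enrichment step discussed above, a common Wallis-style local contrast enhancement is shown below in Python. The formulation is a generic one; the window size and target statistics are placeholder values, not the parameters tuned in the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis_filter(img, window=31, target_mean=127.0, target_std=50.0,
                  brightness=0.6, contrast=0.9):
    """Simple Wallis-style local contrast enhancement of a grayscale image.

    Local means and standard deviations are computed with a box filter; each
    pixel is remapped so that local statistics approach the target mean/std.
    """
    f = img.astype(np.float64)
    m = uniform_filter(f, window)
    s = np.sqrt(np.maximum(uniform_filter(f * f, window) - m * m, 0.0))
    gain = contrast * target_std / (contrast * s + (1.0 - contrast) * target_std)
    out = (f - m) * gain + brightness * target_mean + (1.0 - brightness) * m
    return np.clip(out, 0, 255).astype(np.uint8)

# Example on a synthetic low-contrast image.
if __name__ == "__main__":
    img = (np.random.rand(256, 256) * 40 + 100).astype(np.uint8)
    print(wallis_filter(img).std())
```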
Show Figures

Figure 1. Examples of image-based 3D reconstruction results of complex architectural scenarios achieved with the presented methodology.
Figure 2. Typical examples of SfM results where, despite a message of successful image orientation and a very small re-projection error, some cameras are wrongly oriented. A non-expert user could spot such errors only with difficulty and would proceed to the successive processing stages, negatively affecting the final results.
Figure 3. 3D reconstruction of complex architectures without (a) and with (b) a suitable pre-processing methodology applied to improve the radiometric image quality and provide more complete and noise-free 3D point clouds.
Figure 4. The proposed image pre-processing and enhancement pipeline.
Figure 5. Results of color-to-gray conversion methods: produced gray scale images (a) and image differences (b) between (from left to right) Adobe Photoshop and Decolourize, Adobe Photoshop and Realtime, Adobe Photoshop and Matlab RGB2GRAY, and Adobe Photoshop and BID.
Figure 6. A comparison between Matlab RGB2GRAY (left) and BID (right), which shows a smoother histogram.
Figure 7. Employed datasets (cross vault with plaster, building façade and Venetian floor, respectively) used to study the automatic selection of the Wallis filter parameters.
Figure 8. Results of the Wallis evaluation on the cross vault dataset: number of extracted interest points while varying (a) the standard deviation, (b) the window size and (c) the mean.
Figure 9. Example of the Wallis filtering procedure: note the much smoother, Gaussian-like histogram.
Figure 10. Images used to assess the pairwise matching efficiency of the tie point extraction procedure: parallel acquisitions with short baseline (a,b); tilt of ca. 45° (c); rotation of ca. 90° (d).
Figure 11. Dataset 2 (35 images)—two arches of a portico.
Figure 12. The three regions of interest (A1–A3) surveyed with TLS as ground truth.
Figure 13. Dataset 3 (265 images)—a portico with arcades, columns, vaults and plastered walls.
Figure 14. Example of a dense 3D point cloud of a portico obtained with the presented photogrammetric methodology based on the advanced pre-processing pipeline.
4495 KiB  
Article
Multi-Class Simultaneous Adaptive Segmentation and Quality Control of Point Cloud Data
by Ayman Habib and Yun-Jou Lin
Remote Sens. 2016, 8(2), 104; https://doi.org/10.3390/rs8020104 - 29 Jan 2016
Cited by 28 | Viewed by 6790
Abstract
3D modeling of a given site is an important activity for a wide range of applications including urban planning, as-built mapping of industrial sites, heritage documentation, military simulation, and outdoor/indoor analysis of airflow. Point clouds, which could be either derived from passive or active imaging systems, are an important source for 3D modeling. Such point clouds need to undergo a sequence of data processing steps to derive the necessary information for the 3D modeling process. Segmentation is usually the first step in the data processing chain. This paper presents a region-growing multi-class simultaneous segmentation procedure, where planar, pole-like, and rough regions are identified while considering the internal characteristics (i.e., local point density/spacing and noise level) of the point cloud in question. The segmentation starts with point cloud organization into a kd-tree data structure and characterization process to estimate the local point density/spacing. Then, proceeding from randomly-distributed seed points, a set of seed regions is derived through distance-based region growing, which is followed by modeling of such seed regions into planar and pole-like features. Starting from optimally-selected seed regions, planar and pole-like features are then segmented. The paper also introduces a list of hypothesized artifacts/problems that might take place during the region-growing process. Finally, a quality control process is devised to detect, quantify, and mitigate instances of partially/fully misclassified planar and pole-like features. Experimental results from airborne and terrestrial laser scanning as well as image-based point clouds are presented to illustrate the performance of the proposed segmentation and quality control framework. Full article
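To make the distance-based region-growing idea above more tangible, here is a deliberately simple Python sketch that grows one planar segment from a seed point using a kd-tree and a PCA plane fit. It is not the paper's multi-class procedure; the search radius and residual threshold are fixed placeholders, whereas the paper adapts them to local point density and noise.

```python
import numpy as np
from scipy.spatial import cKDTree

def plane_rms(pts):
    """RMS distance of pts to their best-fit plane (PCA, smallest eigenvector)."""
    c = pts.mean(axis=0)
    w, v = np.linalg.eigh(np.cov((pts - c).T))
    normal = v[:, 0]
    return float(np.sqrt(np.mean(((pts - c) @ normal) ** 2)))

def grow_planar_region(points, seed_idx, radius=0.10, max_rms=0.01):
    """Grow one planar segment from a seed point.

    Neighbours within `radius` are accepted while the region still fits a
    plane with an RMS residual below `max_rms`. Re-fitting the plane for
    every candidate keeps the sketch short, at the cost of speed.
    """
    tree = cKDTree(points)
    region = {seed_idx}
    frontier = [seed_idx]
    while frontier:
        idx = frontier.pop()
        for nb in tree.query_ball_point(points[idx], radius):
            if nb in region:
                continue
            candidate = np.fromiter(region | {nb}, dtype=int)
            if len(candidate) < 3 or plane_rms(points[candidate]) <= max_rms:
                region.add(nb)
                frontier.append(nb)
    return np.fromiter(region, dtype=int)
```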
Show Figures

Figure 1. Framework for the multi-class segmentation and quality control procedure.
Figure 2. Representation scheme for 3D planar features: planes that are almost parallel to the xy-plane (a), the xz-plane (b), and the yz-plane (c).
Figure 3. Representation scheme for 3D pole-like features: pole-like features that are almost parallel to the z-axis (a), the y-axis (b), and the x-axis (c).
Figure 4. Possible segmentation artifacts: misclassified planar features (a,b); misclassified pole-like feature (c); partially misclassified planar features (d,e); partially misclassified pole-like feature (f). Planar and pole-like features are displayed in light blue and light green, respectively, in subfigures (a), (b), (c), (e), (f).
Figure 5. Inner and outer boundary derivation for the identification of intraclass competition for neighboring points.
Figure 6. Slicing and immediate-neighbors concept for the identification of fully (a) and partially (b) misclassified pole-like features.
Figure 7. Segmented planar feature (light blue) and the encompassing MBR (red) (a); segmented linear features (green) (b); final segmentation after the identification of partially misclassified linear features (c).
Figure 8. Perspective views of the point clouds from the ALS (a), TLS1 (b), TLS2 (c), TLS3 (d), and DIM (e) datasets.
Figure 9. Perspective views of the classified point clouds for the ALS (a), TLS1 (b), TLS2 (c), TLS3 (d), and DIM (e) datasets (planar, pole-like, and rough regions shown in blue, green, and red, respectively).
Figure 10. Perspective views of the segmented point clouds for the ALS (a), TLS1 (b), TLS2 (c), TLS3 (d), and DIM (e) datasets (different segments shown in different colors).
Figure 11. Perspective views of the segmented point clouds after the quality control procedure for the ALS–planar (a), DIM–planar (b), TLS1–planar (c), TLS1–pole-like (d), TLS2–planar (e), TLS3–planar (f), and TLS3–pole-like (g) datasets (different segments shown in different colors).
Figure 12. Examples of improved segmentation quality by the different QC measures: (a) after QC1, reclassified pole-like features; (b) after QC2, interclass competition (planar and pole-like); (c) after QC3, over-segmentation (pole-like); (d) before and after QC4, intraclass competition (planar); (e) after QC5, misclassified pole-like; (f) after QC6, misclassified planar.
19213 KiB  
Article
Multi-View Stereo Matching Based on Self-Adaptive Patch and Image Grouping for Multiple Unmanned Aerial Vehicle Imagery
by Xiongwu Xiao, Bingxuan Guo, Deren Li, Linhui Li, Nan Yang, Jianchen Liu, Peng Zhang and Zhe Peng
Remote Sens. 2016, 8(2), 89; https://doi.org/10.3390/rs8020089 - 23 Jan 2016
Cited by 36 | Viewed by 8761
Abstract
Robust and rapid image dense matching is the key to large-scale three-dimensional (3D) reconstruction for multiple Unmanned Aerial Vehicle (UAV) images. However, the following problems must be addressed: (1) the amount of UAV image data is very large, but ordinary computer memory is limited; (2) the patch-based multi-view stereo-matching algorithm (PMVS) does not work well for narrow-baseline cases and its computing efficiency is relatively low, making it difficult to meet UAV photogrammetry’s requirements of convenience and speed. This paper proposes an Image-grouping and Self-Adaptive Patch-based Multi-View Stereo-matching algorithm (IG-SAPMVS) for multiple UAV imagery. First, multiple UAV images were grouped reasonably by a certain grouping strategy. Second, image dense matching was performed in each group and included three processes. (1) Initial feature matching consists of two steps: the first was feature point detection and matching, which made some improvements to PMVS according to the characteristics of UAV imagery; the second was edge point detection and matching, which aimed to control matching propagation during the expansion process. (2) The second process was matching propagation based on the self-adaptive patch. Initial patches were built centered on the obtained 3D seed points, and these were repeatedly expanded. The patches were prevented from crossing discontinuous terrain by using the edge constraint, and the extent, size and shape of the patches could automatically adapt to the terrain relief. (3) The third process was filtering the erroneous matching points. Taking the overlap between each group of 3D dense point clouds into account, the matching results were merged into a whole. Experiments conducted on three sets of typical UAV images with different texture features demonstrate that the proposed algorithm can address a large amount of UAV image data almost without computer memory restrictions; its processing efficiency is significantly better than that of the PMVS algorithm, and its matching accuracy is equal to that of the state-of-the-art PMVS algorithm. Full article
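The memory-driven image-grouping step can be pictured with the short Python sketch below: an ordered strip of images is split into overlapping groups so that the per-group dense point clouds can later be merged without gaps. The group size and overlap are illustrative values only, not the grouping strategy parameters used in the paper.

```python
def group_images(image_ids, group_size=30, overlap=5):
    """Split an ordered list of image identifiers into overlapping groups."""
    groups, start, step = [], 0, group_size - overlap
    while start < len(image_ids):
        groups.append(image_ids[start:start + group_size])
        if start + group_size >= len(image_ids):
            break
        start += step
    return groups

# Example: 67 images split into overlapping groups of 30.
if __name__ == "__main__":
    for g in group_images(list(range(67))):
        print(g[0], "...", g[-1], "(", len(g), "images )")
```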
Show Figures

Figure 1. The relevant window (black) is consistent with the direction of the epipolar line (green dotted line).
Figure 2. (a) Self-adaptive patch; (b) schematic diagram of self-adaptive patches on the DSM.
Figure 3. The xyz-plane coordinate system of the patch.
Figure 4. The plane coordinates of the new points (red) in the patch plane.
Figure 5. The new 3D points (red) propagated by the seed point (black).
Figure 6. Flow chart of the matching propagation based on the self-adaptive patch.
Figure 7. Adjusting the patch according to the edge.
Figure 8. Setting up N × M regular grids in a minimum enclosing rectangle.
Figure 9. The tracking map of the 67 images under freenet.
Figure 10. The merged 3D dense point cloud of Northwest University.
Figure 11. Comparison of the same areas in the 3D dense point clouds and the corresponding images: (a) the building area; (b) the flat area.
Figure 12. Final 3D point cloud results of the Northwest University campus using IG-SAPMVS (a) and PMVS (b).
Figure 13. The GPS tracking map of the UAV images taken in remote mountains under a geodetic control network.
Figure 14. The merged 3D dense point cloud of the remote mountains: (a) plan view; (b) side views.
Figure 15. Final 3D point cloud results of the remote mountains using IG-SAPMVS (a) and PMVS (b).
Figure 16. The Vaihingen test areas.
Figure 17. The three test sites in Vaihingen: (a) eight cut images of the “Inner City”; (b) four cut images of the “High Riser”; (c) six cut images of the “Residential Area”, each extracted from the original aerial images.
Figure 18. The airborne laser scanner data of the experimental region (Vaihingen).
Figure 19. Final 3D point cloud results for the three sets of cut images: (a) Area 1, “Inner City”; (b) Area 2, “High Riser”; (c) Area 3, “Residential Area”.
7259 KiB  
Article
Multispectral Radiometric Analysis of Façades to Detect Pathologies from Active and Passive Remote Sensing
by Susana Del Pozo, Jesús Herrero-Pascual, Beatriz Felipe-García, David Hernández-López, Pablo Rodríguez-Gonzálvez and Diego González-Aguilera
Remote Sens. 2016, 8(1), 80; https://doi.org/10.3390/rs8010080 - 21 Jan 2016
Cited by 48 | Viewed by 8439
Abstract
This paper presents a radiometric study to recognize pathologies in façades of historical buildings by using two different remote sensing technologies covering part of the visible and very near infrared spectrum (530–905 nm). Building materials deteriorate over the years due to different extrinsic and intrinsic agents, so assessing these pathologies in a non-invasive way is crucial to help preserve the buildings, since in many cases they are valuable and some have been declared monuments of cultural interest. For the investigation, passive and active remote acquisition systems operating at different wavelengths were applied. A 6-band Mini-MCA multispectral camera (530–801 nm) and a FARO Focus3D terrestrial laser scanner (905 nm) were used with the dual purpose of detecting different materials and damage on building façades and of determining which acquisition system and spectral range is more suitable for this kind of study. The laser scan points were used as the basis for creating orthoimages, which served as the input to the two classification processes performed. The set of all orthoimages from both sensors was classified using supervised classification. Furthermore, orthoimages from each individual sensor were automatically classified to compare results from each sensor with the reference supervised classification. A higher overall accuracy was obtained with the FARO Focus3D (74.39%) than with the Mini MCA6 (66.04%). Finally, after applying the radiometric calibration, a minimum improvement of 24% in the image classification results was obtained in terms of overall accuracy. Full article
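A generic way to picture the DN-to-reflectance conversion underlying such a radiometric calibration is the empirical-line method sketched below in Python: a linear model is fitted per band between digital numbers measured over reference panels (e.g., Spectralon) and their known reflectances. The panel values are hypothetical and this is not the paper's exact calibration of the Mini MCA6 or of the FARO Focus3D intensity data.

```python
import numpy as np

def empirical_line(panel_dn, panel_reflectance, image_dn):
    """Empirical-line calibration for one band.

    panel_dn: digital numbers measured over the reference panels.
    panel_reflectance: their known reflectances (0-1).
    Returns per-pixel reflectance from the fitted linear DN-to-reflectance model.
    """
    gain, offset = np.polyfit(np.asarray(panel_dn, float),
                              np.asarray(panel_reflectance, float), 1)
    return gain * np.asarray(image_dn, float) + offset

# Hypothetical example: four panels of known reflectance and one raw band.
if __name__ == "__main__":
    band = np.random.randint(200, 3000, size=(100, 100))
    rho = empirical_line([400, 1100, 1900, 2700], [0.05, 0.25, 0.50, 0.75], band)
    print(rho.min(), rho.max())
```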
Show Figures

Figure 1. The workflow of the methodology presented. Acronyms: RC = radiometric calibration; Mp = millions of points.
Figure 2. ASD FieldSpec3 spectroradiometer collecting spectral radiation reflected from (a) the Spectralon target and (b) mortar between contiguous stones of the examined façade.
Figure 3. Sketch of the test performed to analyze the internal radiometric behavior of the FARO Focus3D.
Figure 4. FARO Focus3D backscattered intensity behavior for the measurements of the four Spectralon panels at 1 m distance increments, related to the signal attenuation (Equation (3)).
Figure 5. Relationship between TLS raw intensity data and reflectance for each Spectralon panel at 10 m distance.
Figure 6. South façade of the Church of San Segundo in Ávila (Spain) (left) and a sketch of the acquisition setup with the different sensor stations (MCA6 multispectral camera, FARO Focus3D) (right).
Figure 7. Set of 7 orthoimages of the façade in reflectance values from the two analyzed sensors (MCA6 multispectral camera and FARO Focus3D) and a false color composite orthoimage.
Figure 8. Mini MCA6 map for the 5-cluster unsupervised classification.
Figure 9. FARO Focus3D map for the 5-cluster unsupervised classification.
Figure 10. Multisensory map for the supervised classification with 5 informational classes.
Figure 11. Spectral signatures of the two different types of granites, (a) unaltered and (b) altered, measured with the ASD spectroradiometer for the wavelength interval covered by the sensors used (Mini MCA6 and FARO Focus3D); points are obtained from the orthoimages in reflectance values.
5924 KiB  
Article
Photogrammetric, Geometrical, and Numerical Strategies to Evaluate Initial and Current Conditions in Historical Constructions: A Test Case in the Church of San Lorenzo (Zamora, Spain)
by Luis Javier Sánchez-Aparicio, Alberto Villarino, Jesús García-Gago and Diego González-Aguilera
Remote Sens. 2016, 8(1), 60; https://doi.org/10.3390/rs8010060 - 13 Jan 2016
Cited by 22 | Viewed by 7022
Abstract
Identifying and quantifying the potential causes of damages to a construction and evaluating its current stability have become an imperative task in today’s world. However, the existence of variables, unknown conditions and a complex geometry hinder such work, by hampering the numerical results that simulate its behavior. Of the mentioned variables, the following can be highlighted: (i) the lack of historical information; (ii) the mechanical properties of the material; (iii) the initial geometry and (iv) the interaction with other structures. Within the field of remote sensors, the laser scanner and photogrammetric systems have become especially valuable for construction analysis. Such sensors are capable of providing highly accurate and dense geometrical data with which to assess a building’s condition. It is also remarkable, that the latter provide valuable radiometric data with which to identify the properties of the materials, and also evaluate and monitor crack patterns. Motivated by this, the present article investigates the potential offered by the combined use of photogrammetric techniques (DIC and SfM), as well as geometrical (NURBs and Hausdorff distance) and numerical strategies (FEM) to assess the origin of the damage (through an estimation of the initial conditions) and give an evaluation of the current stability (considering the deformation and the damage). Full article
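To make the geometric comparison strategy mentioned above more concrete, the following Python sketch computes a symmetric Hausdorff distance between two point sets using kd-trees. It is a generic measure under the assumption of simple point-to-point nearest neighbours, not the paper's local Hausdorff metric (LHm_s) implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def symmetric_hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets a, b of shape (N, 3).

    d_H(A, B) = max( sup_a min_b |a - b| , sup_b min_a |a - b| ),
    with nearest neighbours found via kd-trees.
    """
    d_ab, _ = cKDTree(b).query(a)
    d_ba, _ = cKDTree(a).query(b)
    return float(max(d_ab.max(), d_ba.max()))

# Example: deviation between an ideal dome surface sample and a deformed copy.
if __name__ == "__main__":
    ideal = np.random.rand(5000, 3)
    deformed = ideal + np.random.normal(scale=0.003, size=ideal.shape)
    print(symmetric_hausdorff(ideal, deformed))
```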
Show Figures

Figure 1. (a) Detail view of the brick and speckle pattern applied during the Digital Image Correlation (DIC) test; (b) histogram of the speckle pattern.
Figure 2. Digital Image Correlation general outline: the reference subset in red, the initial seed in blue, and the final location of the subset in yellow.
Figure 3. Results of the experimental campaign (2D DIC): (a) deformation measurement, expressed in pixels, between two captures and positioning of the virtual extensometers; (b) stress-strain curve obtained with the virtual extensometer A-A'.
Figure 4. (a) 3D model obtained by the proposed methodology; (b) detail view of the most damaged section through the textured model.
Figure 5. San Lorenzo church: (a) orthophoto of the main façade obtained through the proposed methodology; (b) orthophoto of the west façade (chancel); (c) floor plan view of the church, with the damaged area of the dome in red.
Figure 6. (a) Constructive section of the church's transept; (b) transversal section of the dome geometry (initial state estimated by the Structure from Motion (SfM) point cloud), with dimensions in meters.
Figure 7. Results of the visual inspection of the photogrammetric products: (a) surface comparison between the initial proposed model and the most deformed one estimated by the SfM point cloud; (b) damage inspection in the orthophoto, with the main cracks in green, secondary cracks in blue, and material removal in yellow.
Figure 8. (a) Isometric view of the mesh and the control points (nodes) used for the numerical simulations; (b) first principal stress distribution, in N/mm², for the self-weight case; (c) first principal stress distribution, in N/mm², for the numerical model that considers the asymmetric load.
Figure 9. Graphical distribution of the considered symmetric Hausdorff distance (d_SH), in meters, for the base model.
Figure 10. (a) First principal stress distribution, in N/mm², of the updated model; (b) geometrical accuracy, in terms of the local Hausdorff metric (LHm_s), of the updated model: green, where the geometrical model improves the results; orange, where no improvements occur; red, where the updated numerical model displays worse behavior.
Figure 11. Proposed workflow for the study of the current stability of the construction.
Figure 12. (a) Isometric view of the considered mesh model; (b) discrepancies, in mm, between the Non-Uniform Rational B-Splines (NURBS) and photogrammetric models.
Figure 13. (a) Parametric analysis of different tensile strengths and shear retention factors; (b) parametric analysis of different masonry and infill Young's moduli; (c) maximum principal stress (σ1), in N/mm², at collapse of the initial considered model.
10494 KiB  
Article
A New Approach to the Generation of Orthoimages of Cultural Heritage Objects—Integrating TLS and Image Data
by Jakub Stefan Markiewicz, Piotr Podlasiak and Dorota Zawieska
Remote Sens. 2015, 7(12), 16963-16985; https://doi.org/10.3390/rs71215869 - 16 Dec 2015
Cited by 24 | Viewed by 7195
Abstract
This paper discusses the issue of automation of orthoimage generation based on Terrestrial Laser Scanning (TLS) data and digital images. The following two problems are discussed: automatic generation of projection planes based on TLS data, and automatic orientation of digital images in relation to TLS data. The majority of popular software applications use manual definitions of projection planes. However, the authors propose an original software tool to address the first issue, which defines important planes based on a TLS point cloud utilizing different algorithms (RANdom SAmple Consensus, RANSAC; Hough transform; “region growing”). To address the second task, the authors present a series of algorithms for automated digital image orientation in relation to a point cloud. This is important in cases where scans and images are acquired from different places and at different times. The algorithms utilize Scale Invariant Feature Transform (SIFT) operators to find points that correspond in reflectance intensity between colour (Red Green Blue, RGB) images and orthoimages based on TLS data. The paper also presents a verification method using SIFT and Speeded-Up Robust Features (SURF) operators. The research results in an original tool and applied Computer Vision (CV) algorithms that improve the process of orthoimage generation. Full article
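For the plane-detection part of the workflow described above, a textbook RANSAC loop is sketched below in Python; it is not the PCL, Hough, or region-growing variants compared in the paper, and the iteration count and distance threshold are placeholder values.

```python
import numpy as np

def ransac_plane(points, iterations=500, threshold=0.01, seed=None):
    """Detect the dominant plane in a point cloud with a basic RANSAC loop.

    Three random points define a candidate plane; the plane supported by the
    most inliers (point-to-plane distance < threshold, in the cloud's units)
    wins. Returns (normal, point_on_plane) and the inlier indices.
    """
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(iterations):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, try again
            continue
        normal = normal / norm
        dist = np.abs((points - p0) @ normal)
        inliers = np.flatnonzero(dist < threshold)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (normal, p0)
    return best_model, best_inliers
```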
Show Figures

Figure 1. Diagram of the performed experiments.
Figure 2. Data acquired by the Z+F 5003 scanner: (a) an image of the reflectance level in the ω, φ polar system; (b) in the rectangular XZ (rotated) system; coloured lines show successive rows and columns of source data.
Figure 3. The “thickness” of the point cloud for data recorded by (a) the Z+F 5003 scanner for a wall covered with fabric and (b) the Z+F 5006 scanner for the painted wall; the yellow line shows the “row” of the scanner data (mm).
Figure 4. The image space (a) and the diagram (b) in the parameter space for an individual point.
Figure 5. (a) Intensity orthoimage, Ground Sample Distance (GSD) = 2 mm; (b) map of depth.
Figure 6. Planes detected by the PCL (RANSAC 3D) algorithm: (a) points on the same plane marked with the same colour; (b) rectangles describing the first four detected planes; (c) histogram of the deviation.
Figure 7. Planes detected by the 2D RANSAC algorithm: (a) points on the same plane marked with the same colour (visible errors can be explained by inclusion of points on the floor and ceiling); (b) rectangles describing the first four detected planes; (c) histogram of the deviation.
Figure 8. Distribution of the deviation of the normal vector (N_y component), in degrees, determined for a 10 × 10 mm neighbourhood. Three maxima are visible in the histogram; they correspond to the floor (blue), the mirror above the fireplace (yellow) and the wall (red). The niche area of the door was not separated, although it lies at a different depth.
Figure 9. Points for which the deviation of the N_y normal vector does not exceed 1 degree.
Figure 10. Example of the 2D Hough transform: (a) the XY image, where brightness corresponds to the number of points per 20 × 20 mm cell; (b) the parameter space when the counter is incremented by 1; (c) the parameter space when the counter is incremented by the number of points in a cell (the original algorithm), with the visible maximum corresponding to the direction of the wall; (d) the line corresponding to that maximum, marked on the XY image.
Figure 11. Planes detected by the 2D Hough transform: (a) points on the same plane marked with the same colour (visible errors can be explained by inclusion of points on the floor and ceiling); (b) rectangles describing the first three detected planes; (c) histogram of the deviation.
Figure 12. Example cross-sections of the 3D image utilized in the 3D Hough transform, where brightness corresponds to the number of points per cell: (a) XY image; (b) XZ image; (c) YZ image.
Figure 13. Example cross-sections of the domain of 3D parameters determined by the 3D Hough transform: sections in the ω-r plane for φ = 0°, 45° and 90°, and in the φ-r plane for ω = 0°, 45° and 90°.
Figure 14. Planes detected by the 3D Hough algorithm: (a) points on the same plane marked with the same colour; (b) rectangles describing the first four detected planes; (c) histogram of the deviation.
Figure 15. Results of the 2D Hough algorithm for an object with many planes: (a) the raw point cloud; (b) detected points belonging to two different planes.
Figure 16. Examples of used images: (a) Variant I, one image covering the entire orthoimage area; (b) Variant II, parts of multiple images covering the entire orthoimage area.
Figure 17. Example of a utilized image with control points in green, points eliminated from the adjustment process in red, and check points marked with a yellow “o”.
Figure 18. Histograms of control- and check-point deviations in the X and Y directions (pixels).
Figure 19. Distribution of points detected in a single image (from a group) and in the orthoimage: green “o”, control points; red “+”, wrongly matched points; yellow “o”, check points.
Figure 20. Histograms of control- and check-point deviations in the X and Y directions (pixels).
Figure 21. Histograms of control- and check-point deviations in the X and Y directions (pixels).
Figure 22. Example of a utilized image with points used for orientation (control points, in green) and check points (yellow “o”).
Figure 23. Histograms of control- and check-point deviations in the X and Y directions (pixels).
Figure 24. RGB orthoimage with marked overlaps and seam lines.
Figure 25. An RGB image with displacements resulting from inaccurately oriented images using the 3D DLT method.
9805 KiB  
Article
Multi-Sensor As-Built Models of Complex Industrial Architectures
by Jean-François Hullo, Guillaume Thibault, Christian Boucheny, Fabien Dory and Arnaud Mas
Remote Sens. 2015, 7(12), 16339-16362; https://doi.org/10.3390/rs71215827 - 4 Dec 2015
Cited by 31 | Viewed by 7400
Abstract
In the context of increased maintenance operations and generational renewal work, a nuclear owner and operator, like Electricité de France (EDF), is invested in the scaling-up of tools and methods of “as-built virtual reality” for whole buildings and large audiences. In this paper, we first present the state of the art of scanning tools and methods used to represent a very complex architecture. Then, we propose a methodology and assess it in a large experiment carried out on the most complex building of a 1300-megawatt power plant, an 11-floor reactor building. We also present several developments that made possible the acquisition, processing and georeferencing of multiple data sources (1000+ 3D laser scans and RGB panoramic, total-station surveying, 2D floor plans and the 3D reconstruction of CAD as-built models). In addition, we introduce new concepts for user interaction with complex architecture, elaborated during the development of an application that allows a painless exploration of the whole dataset by professionals, unfamiliar with such data types. Finally, we discuss the main feedback items from this large experiment, the remaining issues for the generalization of such large-scale surveys and the future technical and scientific challenges in the field of industrial “virtual reality”. Full article
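Registering RGB panoramas against surveyed landmarks, as in the pose-estimation tool mentioned above, ultimately relies on converting panoramic pixels into viewing directions. A minimal Python sketch of that conversion for an equirectangular panorama follows; the axis and angle conventions are assumptions for illustration and may differ from the authors' tool.

```python
import numpy as np

def panorama_ray(u, v, width, height):
    """Unit viewing direction for pixel (u, v) of an equirectangular panorama.

    Longitude spans [-pi, pi) across the image width and latitude
    [pi/2, -pi/2] down the height.
    """
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.cos(lat) * np.cos(lon),
                     np.sin(lat)])

# A sighted landmark with known 3D coordinates plus the ray through its pixel
# gives one bearing observation for estimating the panorama pose on a floor map.
if __name__ == "__main__":
    print(panorama_ray(2048, 1024, 8192, 4096))
```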
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Review of three breakthrough projects (1993–2014) of as-built reconstruction from laser scanning data of industrial facilities at Electricité de France (EDF) [<a href="#B1-remotesensing-07-15827" class="html-bibr">1</a>]: (<b>a</b>) first water tank (1993); (<b>b</b>) first turbine hall (2009); (<b>c</b>) first reactor building (2014).</p>
Full article ">Figure 2
<p>Global framework of as-built data production in the experiment. White boxes are the initial data sources; grey boxes are intermediate data versions; and black boxes are the datasets that composed the virtual replica of a complex building. Orange outlines represent stronger control procedures in the data production process.</p>
Full article ">Figure 3
<p>Specific issues of a power facility: (<b>a</b>) rooms are often very congested; (<b>b</b>) objects can be very dark and (<b>c</b>) stainless objects can be clean and then very reflective; (<b>d</b>) these conditions often coexist in one place.</p>
Full article ">Figure 4
<p>Four types of data are required to represent a power plant facility with a high degree of confidence (geometry and appearance): panoramic images, laser scans, floor maps and 3D CAD model.</p>
Full article ">Figure 5
<p>(<b>a</b>) Scale of the reactor in a power plant facility, a cylindrical building with 11 floors; (<b>b</b>) 1000+ TLS stations and targets for referencing appear as a very complex 3D network: green spheres are reference points; red spheres are TLS stations; and blue lines are the sights of registration targets from TLS stations.</p>
Full article ">Figure 6
<p>Key numbers in the multi-sensor scanning survey of a nuclear reactor building [<a href="#B1-remotesensing-07-15827" class="html-bibr">1</a>]; the most notable order of magnitude is the number of stations: more than 1000 (for more details, see [<a href="#B35-remotesensing-07-15827" class="html-bibr">35</a>]). The relative time allocation is shown on the graph; CAD reconstruction accounts for the largest share of the budget.</p>
Full article ">Figure 7
<p>An example of blunder removal in a 3D scan. Blue: 3D points filtered automatically. Red: 3D points manually segmented (the average segmentation time is 6.5 minutes per station).</p>
Full article ">Figure 8
<p>Typical issues in panoramic images. A lack of texture can lead to glitches in the final image (<b>Left</b>). The estimated vertical may have to be refined manually because horizon estimation algorithms can fail by up to 15° (<b>Right</b>).</p>
Full article ">Figure 9
<p>2D as-built floor maps (<b>a</b>) updated using the 3D as-built dataset, laser scans and panoramic images; (<b>b</b>) An interactive Virtual Reality Modeling Language (VRML) visualization of a semantic, geometric and topological representation of the building, based on the as-built floor maps [<a href="#B1-remotesensing-07-15827" class="html-bibr">1</a>].</p>
Full article ">Figure 10
<p>The dense data sources used for as-built reconstruction of the nuclear reactor building from point clouds: (<b>a</b>) 1025 panoramic images with 450 million pixels; (<b>b</b>) 1085 laser scans with 40 million 3D points; (<b>c</b>) 3D reconstruction of 25 types of objects with specific rules (fitting tolerances and naming encodings) and (<b>d</b>) as-built CAD model with 100 rooms and 100,000 3D objects.</p>
Full article ">Figure 11
<p>Spherical image and coordinate axes.</p>
Full article ">Figure 12
<p>User interface of the partial pose estimation tool. The camera position, orientation and field of view of the panoramic image are synchronized with the floor map (<b>Top</b>). A central cross helps the sighting of landmarks; Key numbers of the corrections using 2D maps (1025 images) (<b>Bottom</b>).</p>
Full article ">Figure 13
<p>User interface and procedure for fully estimating the pose of panoramic images in reference to a 3D model (<b>Top</b>); Equator, meridians and central dots help with the alignment and correction of parameters. Zoom on specific details (before/results) (<b>Middle</b>); Key numbers in the corrections using the 3D model (1025 images) (<b>Bottom</b>).</p>
Full article ">Figure 14
<p>Examples of user interfaces that can be found today for virtual tours: Leica TrueView, CSA VirtualTours, Kolor PanoTour, Faro WebShare.</p>
Full article ">Figure 15
<p>Multi-source data exploration and navigation in a complex building: visualization and station-based transitions of (<b>a</b>) panoramic images and (<b>b</b>) laser scans; (<b>c</b>) 2D map view of all stations and synchronized mini-maps and (<b>d</b>) first-person pedestrian navigation in the 3D model, including climbing ladders, taking stairs and crouching.</p>
Full article ">Figure 16
<p>A procedure for improving the quality of distance measurement in laser scans. The user must (<b>a</b>) choose between measurement types and (<b>b</b>) control his/her own measurement by double picking; then (<b>c</b>) check the result in “bubble view” and finally (<b>d</b>) in the 3D model.</p>
Full article ">Figure 17
<p>3D pedestrian path (using navmeshes) overlay on panoramic images.</p>
Full article ">
1379 KiB  
Article
Data Product Specification Proposal for Architectural Heritage Documentation with Photogrammetric Techniques: A Case Study in Brazil
by Mariana B. Campos, Antonio M. G. Tommaselli, Ivana Ivánová and Roland Billen
Remote Sens. 2015, 7(10), 13337-13363; https://doi.org/10.3390/rs71013337 - 12 Oct 2015
Cited by 13 | Viewed by 6306
Abstract
Photogrammetric documentation can provide a sound database for the needs of architectural heritage preservation. However, the major part of photogrammetric documentation production is not used for subsequent architectural heritage projects, due to lack of knowledge of photogrammetric documentation accuracy. In addition, there are [...] Read more.
Photogrammetric documentation can provide a sound database for the needs of architectural heritage preservation. However, the major part of photogrammetric documentation production is not used for subsequent architectural heritage projects, due to lack of knowledge of photogrammetric documentation accuracy. In addition, there are only a few studies with rigorous analysis of the requirements for photogrammetric documentation of architectural heritage. In particular, requirements focusing on the geometry of the models generated by fully digital photogrammetric processes are missing. Considering these needs, this paper presents a procedure for architectural heritage documentation with photogrammetric techniques, based on a review of existing standards for architectural heritage documentation. The proposed data product specification was elaborated in conformance with ISO 19131 recommendations. We present the procedure with two case studies in the context of Brazilian architectural heritage documentation. Quality analysis of the produced models was performed considering ISO 19157 elements, such as positional accuracy, logical consistency and completeness, and the models met the proposed requirements. Our results confirm that the proposed requirements for photogrammetric documentation are viable. Full article
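As a small, hypothetical illustration of the ISO 19157 positional-accuracy element mentioned above (not the authors' code), the sketch below computes per-axis and 3D RMSE from check-point residuals; the residual values are invented.

```python
import numpy as np

# Hypothetical check-point residuals (model minus reference survey), in metres.
residuals = np.array([
    [ 0.004, -0.002,  0.003],
    [-0.006,  0.005, -0.001],
    [ 0.002,  0.001,  0.004],
    [-0.003, -0.004,  0.002],
])

rmse_per_axis = np.sqrt((residuals ** 2).mean(axis=0))   # X, Y, Z RMSE
rmse_3d = np.sqrt((residuals ** 2).sum(axis=1).mean())   # 3D RMSE
print("RMSE X/Y/Z [m]:", rmse_per_axis, "3D RMSE [m]:", rmse_3d)
```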
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Brazilian cultural heritage documentation: content and structure.</p>
Full article ">Figure 2
<p>Topological errors examples: (<b>a</b>) undershoot, (<b>b</b>) overshoot, (<b>c</b>) self-intersection, (<b>d</b>) overlap.</p>
Full article ">Figure 3
<p>(<b>a</b>) Symmetric radial lens distortion effect and (<b>b</b>) Terrestrial calibration field.</p>
Full article ">Figure 4
<p>Presidente Prudente Railway Station model.</p>
Full article ">Figure 5
<p>Checkpoint distribution and residuals.</p>
Full article ">Figure 6
<p>Prudente de Morais model.</p>
Full article ">
5593 KiB  
Article
Assessment and Calibration of a RGB-D Camera (Kinect v2 Sensor) Towards a Potential Use for Close-Range 3D Modeling
by Elise Lachat, Hélène Macher, Tania Landes and Pierre Grussenmeyer
Remote Sens. 2015, 7(10), 13070-13097; https://doi.org/10.3390/rs71013070 - 1 Oct 2015
Cited by 183 | Viewed by 18078
Abstract
In the last decade, RGB-D cameras, also called range imaging cameras, have undergone constant evolution. Because of their limited cost and their ability to measure distances at a high frame rate, such sensors are especially appreciated for applications in robotics [...] Read more.
In the last decade, RGB-D cameras, also called range imaging cameras, have undergone constant evolution. Because of their limited cost and their ability to measure distances at a high frame rate, such sensors are especially appreciated for applications in robotics or computer vision. The release of the Kinect v1 (Microsoft) in November 2010 promoted the use of RGB-D cameras, and a second version of the sensor arrived on the market in July 2014. Since it is possible to obtain point clouds of an observed scene at a high frequency, one could imagine applying this type of sensor to meet the need for 3D acquisition. However, due to the technology involved, some questions have to be considered, such as the suitability and accuracy of RGB-D cameras for close-range 3D modeling. The quality of the acquired data is therefore a central concern. In this paper, the use of a recent Kinect v2 sensor to reconstruct small objects in three dimensions has been investigated. To achieve this goal, a survey of the sensor characteristics as well as a calibration approach are presented. After an accuracy assessment of the produced models, the benefits and drawbacks of Kinect v2 compared to the first version of the sensor and then to photogrammetry are discussed. Full article
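A simple way to reproduce the kind of per-pixel noise assessment described above (cf. Figure 3) is to stack successive depth frames and compute the temporal standard deviation pixel by pixel. The sketch below is hypothetical (synthetic frames with invented noise, not the authors' data):

```python
import numpy as np

def temporal_noise(frames):
    """Per-pixel standard deviation over a stack of depth frames.

    frames: (N, H, W) array of range measurements (e.g., in mm);
    zero values are treated as invalid returns and ignored.
    """
    stack = np.where(frames > 0, frames, np.nan)
    return np.nanstd(stack, axis=0)

# Hypothetical stack of 100 frames of a flat wall at ~1.5 m (512 x 424, Kinect v2-like).
rng = np.random.default_rng(0)
frames = 1500.0 + rng.normal(0.0, 2.0, size=(100, 424, 512))
sigma = temporal_noise(frames)
print("median per-pixel sigma [mm]:", float(np.median(sigma)))
```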
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Time-of-Flight technology among non-contact 3D measuring methods. A more detailed taxonomy of active vision techniques can be found in [<a href="#B11-remotesensing-07-13070" class="html-bibr">11</a>].</p>
Full article ">Figure 2
<p>Schematic representation of the output data of Kinect v2 and summary of point cloud computation.</p>
Full article ">Figure 3
<p>Visualization of color-coded standard deviation calculated for each pixel over (<b>a</b>) 10 successive frames, and (<b>b</b>) 100 successive frames; (<b>c</b>) Observation of standard deviations along a line of pixels (see red line) in the set of 10 frames (orange) and 100 frames (blue).</p>
Full article ">Figure 4
<p>(<b>a</b>) Checkerboard of grey levels inspired from [<a href="#B38-remotesensing-07-13070" class="html-bibr">38</a>], (<b>b</b>) intensity values measured on this checkerboard, and (<b>c</b>) corresponding 3D representation of distance measurements variations (in mm).</p>
Full article ">Figure 5
<p>Outdoor acquisitions: (<b>a</b>) Picture of the observed scene; (<b>b</b>) Corresponding point cloud acquired with the sensor, (<b>c</b>) and the same entire point cloud in a profile view (scene in a black rectangle) without removal of the “flying pixels” (red).</p>
Full article ">Figure 6
<p>(<b>a</b>) Picture of the experimental setup with the observed area in dotted lines; (<b>b</b>) Global schema of the acquisition protocol.</p>
Full article ">Figure 7
<p>(<b>a</b>) Deviations (in cm) between true distances and measured distances, as a function of the range (m). (<b>b</b>) Evolution of the standard deviation (in mm) calculated over each sample of 50 measurements, as a function of the range (m).</p>
Full article ">Figure 8
<p>Point cloud of residuals (mm) in red, at 0.8 m, together with the surface (color-coded) fitting the residuals.</p>
Full article ">Figure 9
<p>(<b>a</b>) Remaining deviations (in mm) between real and corrected distances, as a function of the range (m); Colorized representation of the residuals (in mm) with respect to a fitted plane (range: 1.25 m), (<b>b</b>) before correction and (<b>c</b>) after correction of local deformations.</p>
Full article ">Figure 10
<p>(<b>a</b>) Object under study (a sandstone balustrade fragment); and (<b>b</b>) reference mesh realized after acquisition with a measuring arm.</p>
Full article ">Figure 11
<p>Processing chain applied in this study for 3D reconstruction based on Kinect, from the acquisition of point clouds to the mesh creation.</p>
Full article ">Figure 12
<p>(<b>a</b>) Comparison between Kinect-based point cloud and reference mesh; and (<b>b</b>) comparison between mesh created from Kinect point cloud and reference mesh. The deviations are in millimeters.</p>
Full article ">
1332 KiB  
Article
Automatic Geometry Generation from Point Clouds for BIM
by Charles Thomson and Jan Boehm
Remote Sens. 2015, 7(9), 11753-11775; https://doi.org/10.3390/rs70911753 - 14 Sep 2015
Cited by 177 | Viewed by 16978
Abstract
The need for better 3D documentation of the built environment has come to the fore in recent years, led primarily by city modelling at the large scale and Building Information Modelling (BIM) at the smaller scale. Automation is seen as desirable as it [...] Read more.
The need for better 3D documentation of the built environment has come to the fore in recent years, led primarily by city modelling at the large scale and Building Information Modelling (BIM) at the smaller scale. Automation is seen as desirable as it removes the time-consuming, and therefore costly, human intervention in the process of model generation. BIM is the focus of this paper, as there is not only a commercial need, shown by the number of commercial solutions, but also wide research interest in automated 3D modelling from both the Geomatics and Computer Science communities. The aim is to go beyond the current labour-intensive tracing of the point cloud to an automated process that produces geometry that is both open and more verifiable. This work investigates what can be achieved today with automation, through both a literature review and a novel point cloud processing workflow. We present an automated workflow for the generation of BIM data from 3D point clouds. We also present quality indicators for reconstructed geometry elements and a framework in which to assess the quality of the reconstructed geometry against a reference. Full article
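The segmentation stages of such a workflow (floor/ceiling/wall planes, then splitting walls into clusters; cf. Figure 1) can be sketched with off-the-shelf tools. The snippet below uses Open3D's RANSAC plane segmentation and DBSCAN clustering as stand-ins for the authors' implementation; the file name, plane count and thresholds are hypothetical.

```python
import numpy as np
import open3d as o3d  # assumed available; any RANSAC/clustering library would do

pcd = o3d.io.read_point_cloud("room.ply")     # hypothetical indoor scan

# Peel off the dominant planes (floor, ceiling, walls) with RANSAC.
planes, rest = [], pcd
for _ in range(6):                             # illustrative number of planes
    model, inliers = rest.segment_plane(distance_threshold=0.02,
                                        ransac_n=3, num_iterations=1000)
    planes.append((model, rest.select_by_index(inliers)))
    rest = rest.select_by_index(inliers, invert=True)

# Split each near-vertical plane into individual wall segments by Euclidean clustering.
for model, plane in planes:
    a, b, c, d = model
    if abs(c) < 0.2:                           # normal roughly horizontal => wall candidate
        labels = np.array(plane.cluster_dbscan(eps=0.1, min_points=200))
        print(f"plane ({a:.2f}, {b:.2f}, {c:.2f}, {d:.2f}): {labels.max() + 1} segment(s)")
```

From segments like these, bounding geometry can then be written out as IFC entities, which is the step the paper's quality indicators evaluate.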
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Flowchart of Point Cloud to IFC algorithm steps. (<b>a</b>) Load Point Cloud; (<b>b</b>) Segment the Floor and Ceiling Planes; (<b>c</b>) Segment the Walls and split them with Euclidean Clustering; (<b>d</b>) Build IFC Geometry from Point Cloud segments; (<b>e</b>) (Optional) Spatial reasoning to clean up erroneous geometry; (<b>f</b>) Write the IFC data to an IFC file.</p>
Full article ">Figure 2
<p>Example plot of minimum and maximum (black) and maximum segment (blue) coordinates on the points of a convex hull (red) calculated for a wall cluster point cloud (white).</p>
Full article ">Figure 3
<p>Example of corridor reconstruction without (blue) and with (red) spatial reasoning applied.</p>
Full article ">Figure 4
<p>E57 point cloud with each element classified by IFC type (loaded in CloudCompare with four of nine elements visible).</p>
Full article ">Figure 5
<p>Image of corridor (<b>left</b>) and resulting point cloud data collected with a Viametris Indoor Mobile Mapping System (<b>top</b> <b>right</b>) and Faro Focus 3D S (<b>bottom right</b>).</p>
Full article ">Figure 6
<p>Image of office (<b>left</b>) and point cloud data collected with a Viametris Indoor Mobile Mapping System (<b>top right</b>) and Faro Focus 3D S (<b>bottom right</b>).</p>
Full article ">Figure 7
<p>Plots of boundary placement of manual and automatically created geometry from the static scan data for (<b>a</b>) wall 4 and (<b>b</b>) wall 15 of the corridor dataset.</p>
Full article ">Figure 7 Cont.
<p>Plots of boundary placement of manual and automatically created geometry from the static scan data for (<b>a</b>) wall 4 and (<b>b</b>) wall 15 of the corridor dataset.</p>
Full article ">Figure 8
<p>Extracted walls (red) of the corridor from the static (green) and mobile (blue) point cloud data overlaid on the human-generated model (grey/white).</p>
Full article ">Figure 9
<p>Charts of calculated reconstruction quality for each wall in the reference human-created model of the corridor against the automated geometry from the two scan datasets.</p>
Full article ">Figure 10
<p>Noise that caused over-extension from the static corridor scan data shown against reference wall.</p>
Full article ">Figure 11
<p>Extracted walls (red) of the office from the static (green) and mobile (blue) point cloud data overlaid on the human-generated model (grey/white).</p>
Full article ">Figure 12
<p>Charts of calculated reconstruction quality for each wall in the reference human-created model of the office against the automated geometry from the two scan datasets.</p>
Full article ">
808 KiB  
Article
Geometrical Issues on the Structural Analysis of Transmission Electricity Towers Thanks to Laser Scanning Technology and Finite Element Method
by Borja Conde, Alberto Villarino, Manuel Cabaleiro and Diego Gonzalez-Aguilera
Remote Sens. 2015, 7(9), 11551-11569; https://doi.org/10.3390/rs70911551 - 10 Sep 2015
Cited by 19 | Viewed by 12622
Abstract
This paper presents a multidisciplinary approach to reverse engineering and structural analysis of electricity transmission tower structures through the combination of laser scanning systems and finite element methodology. The use of laser scanning technology allows the development of both drawings and highly accurate [...] Read more.
This paper presents a multidisciplinary approach to reverse engineering and structural analysis of electricity transmission tower structures through the combination of laser scanning systems and finite element methodology. The use of laser scanning technology allows the development of both drawings and highly accurate three-dimensional geometric models that reliably reproduce the geometric reality of tower structures, detecting imperfections and particularities of their assembly. Due to this, it is possible to analyze and quantify the effect of these imperfections on their structural behavior, taking into account the actual geometry obtained, the different structural models, and the load hypotheses proposed. The method has been applied to three different types of metal electricity transmission towers carrying high-voltage lines, located in Guadalajara (Spain), in order to analyze their structural viability to accommodate future load increases with respect to those to which they are currently subjected. Full article
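One elementary measure of the assembly imperfections discussed above (cf. Figure 1b,c) is the angular deviation of a scanned member's axis from the vertical. The sketch below fits the axis by PCA on hypothetical leg points; it is an illustration under invented values, not the authors' procedure.

```python
import numpy as np

def axis_deviation_deg(points):
    """Fit a straight axis to a scanned member by PCA and return its
    angular deviation from the vertical, in degrees.

    points: (N, 3) array of laser points belonging to one member.
    """
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    axis = vt[0] / np.linalg.norm(vt[0])             # dominant direction of the member
    cos_a = abs(axis @ np.array([0.0, 0.0, 1.0]))    # against the vertical axis
    return np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0)))

# Hypothetical leg sampled along z with a small lateral drift plus noise.
rng = np.random.default_rng(1)
z = np.linspace(0.0, 10.0, 500)
leg = np.column_stack([0.004 * z, 0.002 * z, z]) + rng.normal(0.0, 0.002, (500, 3))
print(f"deviation from vertical: {axis_deviation_deg(leg):.2f} deg")
```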
Show Figures

Graphical abstract
Full article ">Figure 1
<p>(<b>a</b>) Detail of the bolted connections with the inclusion of the eccentricity present at the linkage between profiles; (<b>b</b>) vertical deviation of the main body of the tower; (<b>c</b>) horizontal deviation in the members of the main body of the tower.</p>
Full article ">Figure 2
<p>Scheme of the main structural parts of a transmission tower, based on drawings of one of the case studies analyzed herein (Tower 1). Data were obtained with laser scanning technology according to the methodology presented in <a href="#sec3-remotesensing-07-11551" class="html-sec">Section 3</a>.</p>
Full article ">Figure 3
<p>Load cases considered in the structural analysis. (<b>a</b>) Hypothesis 1: wind; (<b>b</b>) Hypothesis 2: imbalanced tractions; (<b>c</b>) Hypothesis 3: electrical conductor failure.</p>
Full article ">Figure 4
<p>Transmission towers considered in this study: (<b>a</b>) tower 1; (<b>b</b>) tower 2; (<b>c</b>) tower 3.</p>
Full article ">Figure 5
<p>Geometrical CAD wireframe models of transmission towers analyzed: (<b>a</b>) tower 1; (<b>b</b>) tower 2; and (<b>c</b>) tower 3.</p>
Full article ">Figure 6
<p>Nodes considered for analysis of displacements according to the global axis directions in each tower and structural model analyzed. (<b>a</b>) tower 1; (<b>b</b>) tower 2; (<b>c</b>) tower 3.</p>
Full article ">Figure 7
<p>Detail of the different structural behavior of Tower 1 under Load Case 3 in Model 2 (<b>a</b>) and Model 3 (<b>b</b>). The undeformed shape is shown in blue and the deformed shape in red.</p>
Full article ">Figure 8
<p>Normal stress distribution on Tower 1 for different structural models. (<b>a</b>) Model 1, with an ideal representation of the nodes and maximum stress below the yield limit of steel; (<b>b</b>) Model 3, including the improper execution of the nodes and maximum stress exceeding the yield limit of steel.</p>
Full article ">
3275 KiB  
Article
Scanning Photogrammetry for Measuring Large Targets in Close Range
by Shan Huang, Zuxun Zhang, Tao Ke, Min Tang and Xuan Xu
Remote Sens. 2015, 7(8), 10042-10077; https://doi.org/10.3390/rs70810042 - 7 Aug 2015
Cited by 13 | Viewed by 9323
Abstract
In close-range photogrammetry, images are difficult to acquire and organize primarily because of the limited field of view (FOV) of digital cameras when long focal lenses are used to measure large targets. To overcome this problem, we apply a scanning photography method that [...] Read more.
In close-range photogrammetry, images are difficult to acquire and organize primarily because of the limited field of view (FOV) of digital cameras when long focal lenses are used to measure large targets. To overcome this problem, we apply a scanning photography method that acquires images by rotating the camera in both the horizontal and vertical directions at one station. This approach not only enlarges the FOV of each station but also ensures that all stations are distributed in order without coverage gaps. We also conduct a modified triangulation, adapted to the way data overlap among images from the same station, to avoid matching all images with one another. This algorithm synthesizes the images acquired from the same station into synthetic images, which are then used to generate a free network. Consequently, we solve the exterior orientation elements of each original camera image in the free network and perform image matching among the original images to obtain tie points. Finally, all original images are combined in a self-calibration bundle adjustment with control points. The feasibility and precision of the proposed method are validated by testing it on two fields using 300 mm and 600 mm lenses. The results confirm that even with a small number of control points, the developed scanning photogrammetry can steadily achieve millimeter-scale accuracy at distances ranging from 40 m to 250 m. Full article
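As a first-order illustration of the station geometry (ignoring the perspective effects that the paper's rotation-angle calculation accounts for), the sketch below derives the camera field of view and the rotation increment between adjacent images from the frame size, focal length and set overlaps quoted for Figure 4; all values are illustrative.

```python
import math

def angular_step(sensor_mm, focal_mm, overlap):
    """Field of view and rotation increment between adjacent images for a given overlap."""
    fov = 2.0 * math.degrees(math.atan(sensor_mm / (2.0 * focal_mm)))
    return fov, fov * (1.0 - overlap)

# Illustrative values quoted for Figure 4: 36 mm x 24 mm frame, 100 mm lens,
# 80% horizontal and 60% vertical set overlaps.
h_fov, h_step = angular_step(36.0, 100.0, 0.80)
v_fov, v_step = angular_step(24.0, 100.0, 0.60)
print(f"horizontal FOV {h_fov:.1f} deg, step {h_step:.1f} deg")
print(f"vertical FOV {v_fov:.1f} deg, step {v_step:.1f} deg")
```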
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Illustration of image matrix at one station (The matrix contains 6 rows and 10 columns).</p>
Full article ">Figure 2
<p>Rotation angles influenced by perspective projection.</p>
Full article ">Figure 3
<p>Calculation of rotation angles.</p>
Full article ">Figure 4
<p>Distribution of horizontal and vertical rotation angles; the focal length is 100 mm, and the camera format is 36 mm × 24 mm. The rotation angles of the image at the bottom left corner and top right corner of the image matrix are (−30°, −30°) and (30°, 30°), respectively. The set overlaps in the horizontal and vertical directions are 80% and 60%, respectively.</p>
Full article ">Figure 5
<p>Synthetic images from image matrices with rotation angles calculated in different approaches; (<b>a</b>) is the synthetic image from the image matrix acquired using the same approach in determining the relative rotation angle between adjacent images; (<b>b</b>) is the synthetic image from the image matrix acquired with the improved method introduced in this paper; (<b>c</b>) is the image at the top left corner of the image matrix presented in <a href="#remotesensing-07-10042-f005" class="html-fig">Figure 5</a>a; (<b>d</b>) is the image at the top left corner of the image matrix presented in <a href="#remotesensing-07-10042-f005" class="html-fig">Figure 5</a>b.</p>
Full article ">Figure 6
<p>Station distribution: (<b>a</b>) station distribution of wide target measuring; (<b>b</b>) station distribution of narrow target measuring.</p>
Full article ">Figure 7
<p>Parts of the photo scanner.</p>
Full article ">Figure 8
<p>Flow chart of data processing.</p>
Full article ">Figure 9
<p>Model of image synthesis.</p>
Full article ">Figure 10
<p>Quick view of test fields. ((<b>a</b>) flag platform of Wuhan University; (<b>b</b>) high slope of Three Gorges Project permanent lock. The regions in the red rectangles show the measuring ranges.)</p>
Full article ">Figure 11
<p>Synthetic images: (<b>a</b>) synthetic image of one station for the test field shown in <a href="#remotesensing-07-10042-f010" class="html-fig">Figure 10</a>a; (<b>b</b>) synthetic image of one station for the test field shown in <a href="#remotesensing-07-10042-f010" class="html-fig">Figure 10</a>b.</p>
Full article ">Figure 12
<p>Synthetic images matching. (<b>a</b>,<b>b</b>) show the matching results of synthetic images from the first and second stations for the test field in <a href="#remotesensing-07-10042-f010" class="html-fig">Figure 10</a>a; (<b>c</b>,<b>d</b>) illustrate part of the results.</p>
Full article ">Figure 13
<p>Synthetic images matching. (<b>a</b>,<b>b</b>) show the matching results of synthetic images from the first and second stations for the test field in <a href="#remotesensing-07-10042-f010" class="html-fig">Figure 10</a>b; (<b>c</b>,<b>d</b>) illustrate part of the results.</p>
Full article ">Figure 14
<p>Original images matching. (<b>a</b>,<b>b</b>) show the matching results of images from adjacent stations when measuring the first field shown in <a href="#remotesensing-07-10042-f010" class="html-fig">Figure 10</a>a; (<b>c</b>,<b>d</b>) illustrate part of the results.</p>
Full article ">Figure 15
<p>Original images matching. (<b>a</b>,<b>b</b>) show the matching results of adjacent images in the same row of one image matrix when measuring the first field shown in <a href="#remotesensing-07-10042-f010" class="html-fig">Figure 10</a>a; (<b>c</b>,<b>d</b>) illustrate part of the results.</p>
Full article ">Figure 16
<p>Results of stereo images matching. (<b>a</b>,<b>b</b>) show the matching results of images from adjacent stations when measuring the second field shown in <a href="#remotesensing-07-10042-f010" class="html-fig">Figure 10</a>b; (<b>c</b>,<b>d</b>) illustrate part of the results.</p>
Full article ">Figure 17
<p>Results of stereo images matching. (<b>a</b>,<b>b</b>) show the matching results of adjacent images in the same row of one image matrix when measuring the second field shown in <a href="#remotesensing-07-10042-f010" class="html-fig">Figure 10</a>b; (<b>c</b>,<b>d</b>) illustrate part of the results.</p>
Full article ">Figure 18
<p>Point clouds. (<b>a</b>,<b>b</b>) show the point clouds before and after self-calibration bundle adjustment with control points in the experiment on the first field shown in <a href="#remotesensing-07-10042-f010" class="html-fig">Figure 10</a>a.</p>
Full article ">Figure 19
<p>Point clouds. (<b>a</b>,<b>b</b>) show the point clouds before and after bundle adjustment in the experiment on the second field shown in <a href="#remotesensing-07-10042-f010" class="html-fig">Figure 10</a>b.</p>
Full article ">Figure 20
<p>Error vectors of control points and check points. (<b>a</b>) shows the error vectors of points measured at a distance of 40 m; (<b>b</b>) and (<b>c</b>) show error vectors of points measured at a photo distance of 80 m, with 12 and 10 control points, respectively; (<b>d</b>) and (<b>e</b>) show error vectors at a photo distance of 150 m, with 12 and eight control points, respectively.</p>
Full article ">Figure 21
<p>Error vectors of control points and check points. (<b>a</b>), (<b>b</b>), and (<b>c</b>) show the error vectors of points (coordinate residuals of check points) in the experiments with image horizontal overlaps of 80%, 60%, and 30%, respectively.</p>
Full article ">Figure 22
<p>Error vectors of control points and check points. (<b>a</b>) and (<b>b</b>) show the error vectors of points (coordinate residuals of check points) in the experiments with images obtained from three and five stations (<span class="html-italic">M</span> = 3 and 5), respectively.</p>
Full article ">Figure 23
<p>Error vectors of control points and check points. (<b>a</b>) and (<b>b</b>) show the error vectors of points in the experiments with designed intersection angles of 26° and 35°, respectively.</p>
Full article ">Figure 24
<p>Error vectors of control points and check points. (<b>a</b>) and (<b>b</b>) show the error vectors of points in the experiments with the 300 mm and 600 mm focal lenses, respectively.</p>
Full article ">
12786 KiB  
Article
Optimized 3D Street Scene Reconstruction from Driving Recorder Images
by Yongjun Zhang, Qian Li, Hongshu Lu, Xinyi Liu, Xu Huang, Chao Song, Shan Huang and Jingyi Huang
Remote Sens. 2015, 7(7), 9091-9121; https://doi.org/10.3390/rs70709091 - 17 Jul 2015
Cited by 15 | Viewed by 8494
Abstract
The paper presents an automatic region detection based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are [...] Read more.
The paper presents an automatic region detection based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Because these image data are inexpensive, widely available, and offer extensive shooting coverage, utilizing them can reduce the cost of street scene reconstruction and updating; we therefore propose a new method, called the Mask automatic detecting method, to improve the structure from motion reconstruction results. Note that we define vehicle and guardrail regions as the “mask” in this paper, since the features on them should be masked out to avoid poor matches. After removing these feature points, the camera poses and sparse 3D points are reconstructed with the remaining matches. Our contrast experiments with typical structure from motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing the features on the Mask also increased the accuracy of the point clouds by nearly 30%–40% and corrected the tendency of the typical methods to reconstruct several copies of a building when there was only one target building. Full article
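The core of the Mask idea, removing features inside detected vehicle and guardrail regions before matching, can be sketched with OpenCV by passing a binary mask to the SIFT detector. The bounding boxes, file name and values below are hypothetical, and the detector that would produce the boxes is assumed rather than shown.

```python
import cv2
import numpy as np

def masked_sift(image_path, boxes):
    """Detect and describe SIFT features outside the given regions.

    boxes: list of (x, y, w, h) rectangles covering vehicles/guardrails,
    e.g., the output of a pretrained detector (hypothetical here).
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    assert img is not None, "image not found"
    mask = np.full(img.shape, 255, dtype=np.uint8)   # 255 = keep, 0 = mask out
    for x, y, w, h in boxes:
        mask[y:y + h, x:x + w] = 0
    sift = cv2.SIFT_create()
    return sift.detectAndCompute(img, mask)

# Hypothetical frame and detections from a driving recorder.
keypoints, descriptors = masked_sift("frame_0001.jpg",
                                     [(420, 310, 180, 120), (0, 380, 640, 100)])
print(f"{len(keypoints)} keypoints kept outside the masked regions")
```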
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Driving recorder and recorded image. (<b>a</b>) Photo of one type of driving recorder obtained from the Internet. (<b>b</b>) Test data recorded by the driving recorder in this paper.</p>
Full article ">Figure 2
<p>The pipeline of 3D reconstruction from driving recorder data. The grey frames show the typical SfM process. The two orange frames are the main improvement steps proposed in this paper.</p>
Full article ">Figure 3
<p>Example of samples and classifiers.</p>
Full article ">Figure 4
<p>Photographic model of driving recorder. (<b>a</b>) Integrated photographic model of driving recorder. (<b>b</b>) Side view of model. (<b>c</b>) Partial enlargement of side view model. The oblique image plane is the driving recorder image plane. Point O is the projective center, and O″ is the principal point on the driving recorder image plane. The focal length f is OO″. Point O′ is the principal point on the virtual vertical image plane. Line OE is perpendicular to the ground. Point E is the intersection point of the ground and line OE. The plane M′O′OOEF can be drawn perpendicular to both the image plane and the ground. OM′ is perpendicular to M′J′, and OM is perpendicular to MJ. Line LN is perpendicular to OE. MP is a vertical line for the ground, and P is the intersection point of line MP and line ON. Line M″T is perpendicular to O′O. The angle between the oblique plane and the vertical plane is θ. Angles MON and O″ON″ are α and β, respectively.</p>
Full article ">Figure 5
<p>(<b>a</b>) and (<b>b</b>) depict the box marking drawing method. (<b>c</b>) Example of box marking in an image. The principal point O″ is the center point of the image, and the black rectangle KJAB is the vehicle back surface in the image plane, which is detected by the classifier described in <a href="#sec2dot1-remotesensing-07-09091" class="html-sec">Section 2.1</a>. Point V is the vanishing point in the image. Line l is the perpendicular bisector of the image passing through the principal point O″. Line KM″ is parallel to the x axis of the image, and M″ is the intersection point on l. Line N″Q intersects lines VK and VJ at points Q and C, respectively. N″Q is parallel with M″K. Line QD intersects line VA at point D, and line DE intersects line VB at point E. Lines QC and DE are parallel to the x axis, and QD is parallel to the y axis of the image.</p>
Full article ">Figure 6
<p>Guardrail detection process. (<b>a</b>) Detection results of a specially-designed guardrail classifier, which could detect thousands of results, including not only correct guardrails but many wrong detection regions as well. (<b>b</b>) Example of how to draw the red lines from the vanishing point to the detection regions. (<b>c</b>) Results of red lines drawn from the vanishing point to each centre line of the rectangle regions at an interval of 2°. An example of a rectangle region’s centre line is shown in the bottom left corner of (<b>c</b>); and (<b>d</b>) is an example of an intersection angle between the top and bottom edges of the guardrail.</p>
Full article ">Figure 7
<p>Guardrail location method. (<b>a</b>) Example of four triangle regions, each with an included angle of 15° and a fixed vertex (the vanishing point). (<b>b</b>) The triangle region that contained the largest number of lines. (<b>c</b>) Final detection results of the guardrail location method.</p>
Full article ">Figure 8
<p>Blocked-vehicle detection method (guardrail region broadening method). Two vehicles running in opposite directions, indicated by the yellow arrows, are missed by the vehicle classifier. These missed vehicle regions are included in the broadened guardrail regions, shown in red.</p>
Full article ">Figure 9
<p>SIFT feature point removal results. (<b>a</b>) Original SIFT feature point set on the image. (<b>b</b>) Mask results, showing the masked-out features on the vehicle and guardrail regions.</p>
Full article ">Figure 10
<p>RMSEs of each image. The X-axis represents the serial number of the image pairs and the Y-axis represents the RMSEs, which are shown as millimeters. The blue and red lines show the RMSEs of the typical method and our method, respectively. The correspondences in our method were matched after removing the SIFT features on the Mask, and then the outliers were eliminated by the epipolar constraint (QDEGSAC) method. In the typical method, the correspondences were filtered only by the epipolar constraint (QDEGSAC) method.</p>
Full article ">Figure 11
<p>Explanation of the reconstructed camera-pose-triangle and driving tracks. (<b>a</b>) Each colored triangle represents the position of a recovered image and its camera projective center; the size of the triangle follows the size of the image data. (<b>b</b>) The red line represents the recovered driving track of the vehicle that carried recorder 1; the colored triangles are the reconstructed results representing the positions of the images taken by recorder 1.</p>
Full article ">Figure 12
<p>The recovered image positions of Set 1. These images were taken by recorders 4 and 5, which had the same exposure interval and were mounted on one vehicle. (<b>a</b>) and (<b>b</b>) are the recovered results from the same data with different methods. (a) depicts the reconstruction by the typical SfM method. The recovered images in the red rectangle of (a) are clearly disordered. (b) depicts the reconstruction by our method (features on vehicles and guardrails were masked out before matching and reconstruction). (<b>c</b>) is not a georeferenced result. We manually scaled the results of (b) and put them on the Google satellite map to help readers visualize the rough locations of the image sequences on the roundabout.</p>
Full article ">Figure 13
<p>The recovered image positions of Set 2. These images were taken by recorders 1, 2, 3, and 4 mounted on their respective vehicles. (<b>a</b>) Reconstruction by the typical SfM method. The recovered disordered images in the red rectangles of (a) were recorded by recorder 4. (<b>b</b>) is not a georeferenced result. We manually scaled the results of (a) and put it on the Google satellite map. Based on the enlargement in (a) and the visualized rough location in (b), it can be seen that they were reconstructed in the wrong place. (<b>c</b>) Reconstruction by our method (features on vehicles and guardrails were masked out before matching and reconstruction). The recovered triangles of recorder 4 are smaller than the others because the sizes of the images taken by recorder 4 were smaller than those of the other recorders, which is reflected in (c) by the different reconstructed sizes of the triangles. (a) and (c) are the recovered results from the same data using different methods.</p>
Full article ">Figure 14
<p>The recovered image positions of Set 3. These images were taken by recorders 1–5. (<b>a</b>) and (<b>b</b>) are the recovered results from the same data with different methods; (a) was reconstructed by the typical SfM method and (b) was reconstructed by our method (features on vehicles and guardrails were masked out before matching and reconstruction). The images in the red rectangles in (a) were recovered in a disordered arrangement. (<b>c</b>) is not a georeferenced result. We manually scaled the recovery results of our method and put them on the Google satellite map to help readers visualize the rough locations of the image sequences on the roundabout.</p>
Full article ">Figure 15
<p>Main targets in the sparse point clouds reconstruction process. The two building models in (<b>a</b>) and (<b>b</b>) with red and yellow marks are the main reconstruction targets. (a) and (b) are the side and oblique bird’s-eye view of two buildings from Google Earth, respectively.</p>
Full article ">Figure 15 Cont.
<p>Main targets in the sparse point clouds reconstruction process. The two building models in (<b>a</b>) and (<b>b</b>) with red and yellow marks are the main reconstruction targets. (a) and (b) are the side and oblique bird’s-eye view of two buildings from Google Earth, respectively.</p>
Full article ">Figure 16
<p>Side view of main target reconstruction results with sparse point clouds. Each result was reconstructed with the same data of 311 images in Set 3. (<b>a</b>) Sparse point clouds reconstructed by Photosynth without any added processing. The building on the left marked in red was repetitively reconstructed. (<b>b</b>) Sparse point clouds reconstructed by VisualSFM with the typical method. The building on the right could not be reconstructed and should be positioned inside the yellow box. (<b>c</b>) Sparse point clouds reconstructed by VisualSFM with our method. The details of the differences between the typical method and our method are described in <a href="#sec2dot5-remotesensing-07-09091" class="html-sec">Section 2.5</a> but can be summarized by saying that our method removed the features on the Mask and matched the remaining feature points before reconstruction.</p>
Full article ">Figure 16 Cont.
<p>Side view of main target reconstruction results with sparse point clouds. Each result was reconstructed with the same data of 311 images in Set 3. (<b>a</b>) Sparse point clouds reconstructed by Photosynth without any added processing. The building on the left marked in red was repetitively reconstructed. (<b>b</b>) Sparse point clouds reconstructed by VisualSFM with the typical method. The building on the right could not be reconstructed and should be positioned inside the yellow box. (<b>c</b>) Sparse point clouds reconstructed by VisualSFM with our method. The details of the differences between the typical method and our method are described in <a href="#sec2dot5-remotesensing-07-09091" class="html-sec">Section 2.5</a> but can be summarized by saying that our method removed the features on the Mask and matched the remaining feature points before reconstruction.</p>
Full article ">Figure 17
<p>Vertical view of main target reconstruction results with sparse point clouds. Each result was reconstructed from the same data of 311 images in Set 3. (<b>a</b>) Sparse point clouds reconstructed by Photosynth without any added processing. The result is chaotic. In addition to the repetition shown in <a href="#remotesensing-07-09091-f016" class="html-fig">Figure 16</a>, it can be clearly seen that not only was the left building repeatedly reconstructed, but the right building was as well. The repetitive reconstructions are marked in red for the left building and in yellow for the right building. (<b>b</b>) Sparse point clouds reconstructed by VisualSFM with the typical method. The right building, which should be reconstructed inside the yellow mark, was missed. (<b>c</b>) Sparse point clouds reconstructed by VisualSFM with our method. The differences between the typical method and our method are described in <a href="#sec2dot5-remotesensing-07-09091" class="html-sec">Section 2.5</a>, which can be summarized by saying that we removed the features on the Mask and matched the remaining feature points before reconstruction. (<b>d</b>) shows a more intuitive result. It is not a georeferenced result. We manually scaled the sparse point clouds of our method and put them on the Google satellite map, which helps readers visualize the high level of overlap between the point clouds and the map, the rough relative positions of the two buildings, and the positions of the recovered images on the roundabout.</p>
Full article ">Figure 17 Cont.
<p>Vertical view of main target reconstruction results with sparse point clouds. Each result was reconstructed from the same data of 311 images in Set 3. (<b>a</b>) Sparse point clouds reconstructed by Photosynth without any added processing. The result is chaotic. In addition to the repetition shown in <a href="#remotesensing-07-09091-f016" class="html-fig">Figure 16</a>, it can be clearly seen that not only was the left building repeatedly reconstructed, but the right building was as well. The repetitive reconstructions are marked in red for the left building and in yellow for the right building. (<b>b</b>) Sparse point clouds reconstructed by VisualSFM with the typical method. The right building, which should be reconstructed inside the yellow mark, was missed. (<b>c</b>) Sparse point clouds reconstructed by VisualSFM with our method. The differences between the typical method and our method are described in <a href="#sec2dot5-remotesensing-07-09091" class="html-sec">Section 2.5</a>, which can be summarized by saying that we removed the features on the Mask and matched the remaining feature points before reconstruction. (<b>d</b>) shows a more intuitive result. It is not a georeferenced result. We manually scaled the sparse point clouds of our method and put them on the Google satellite map, which helps readers visualize the high level of overlap between the point clouds and the map, the rough relative positions of the two buildings, and the positions of the recovered images on the roundabout.</p>
Full article ">Figure 18
<p>The vertical view of the two planes. (<b>a</b>) shows the sparse point clouds reconstructed by VisualSFM with our method; Planes 1 and 2 are the target planes we fitted. (<b>b</b>) shows the position of the target wall planes in Google Maps. (<b>c</b>) shows Planes 1 and 2 in street view. (<b>d</b>) Example of a plane-fitting result in vertical view. The red line represents the vertical view of the plane fitted to the wall points, and the blue lines are examples of the distances between the plane and the points.</p>
Full article ">

Review


9041 KiB  
Review
A Survey of Algorithmic Shapes
by Ulrich Krispel, Christoph Schinko and Torsten Ullrich
Remote Sens. 2015, 7(10), 12763-12792; https://doi.org/10.3390/rs71012763 - 28 Sep 2015
Cited by 9 | Viewed by 10854
Abstract
In the context of computer-aided design, computer graphics and geometry processing, the idea of generative modeling is to allow the generation of highly complex objects based on a set of formal construction rules. Using these construction rules, a shape is described by a [...] Read more.
In the context of computer-aided design, computer graphics and geometry processing, the idea of generative modeling is to allow the generation of highly complex objects based on a set of formal construction rules. Using these construction rules, a shape is described by a sequence of processing steps, rather than just by the result of all applied operations: shape design becomes rule design. Due to its very general nature, this approach can be applied to any domain and to any shape representation that provides a set of generating functions. The aim of this survey is to give an overview of the concepts and techniques of procedural and generative modeling, as well as their applications with a special focus on archeology and architecture. Full article
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Compass-and-ruler operations have long been used in interactive procedural modeling. This Gothic window construction was created in the framework presented by Wolfgang Thaller <span class="html-italic">et al.</span> using direct manipulation without any code or graph editing [<a href="#B5-remotesensing-07-12763" class="html-bibr">5</a>].</p>
Full article ">Figure 2
<p>The presented generative description is able to produce a large variety of wedding rings. Features, like engravings, recesses, different materials, unusual forms and gems, can be created and customized.</p>
Full article ">Figure 3
<p>The Helix Bridge is a pedestrian bridge in the Marina Bay area in Singapore. Its generative design has been optimized numerically. Furthermore, the bridge was fully modeled in order to visualize its form and geometrical compatibility, as well as to visualize the pedestrian experience on the bridge.</p>
Full article ">Figure 4
<p>The vase on the left-hand side is a digitized artifact of the “Museum Eggenberg” collection. It consists of 364,774 vertices and 727,898 triangles. The example of a procedural shape on the right-hand side takes two points <span class="html-italic">R</span> and <span class="html-italic">T</span> in 3D and distance values, which define the control vertices of a Bézier curve.</p>
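A generative description like the vase profile above ultimately evaluates a parametric curve from a handful of control vertices. The sketch below evaluates a Bézier curve with de Casteljau's algorithm; the control points are invented for illustration and do not reproduce the museum artifact.

```python
import numpy as np

def bezier(control_points, n_samples=100):
    """Evaluate a Bézier curve with de Casteljau's algorithm.

    control_points: (k, 3) array of control vertices;
    returns (n_samples, 3) points on the curve.
    """
    pts = np.asarray(control_points, dtype=float)
    samples = []
    for t in np.linspace(0.0, 1.0, n_samples):
        p = pts.copy()
        while len(p) > 1:                      # repeated linear interpolation
            p = (1.0 - t) * p[:-1] + t * p[1:]
        samples.append(p[0])
    return np.array(samples)

# Invented control vertices standing in for the R/T points and distance values
# of the generative vase description.
profile = bezier([[0.0, 0.0, 0.0], [0.8, 0.0, 0.5], [0.3, 0.0, 1.5], [0.6, 0.0, 2.0]])
print(profile.shape)  # (100, 3) profile points that could be swept into a surface
```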
Full article ">Figure 5
<p>Gothic architecture is defined by strict rules and characteristic building blocks. The generative description of Gothic cathedrals encodes these building blocks and the rules for combining them. These building blocks were created by Michael Curry, <a href="http://www.thingiverse.com/thing:2030" target="_blank">http://www.thingiverse.com/thing:2030</a>.</p>
Full article ">

Other


7801 KiB  
Technical Note
A Multi-Data Source and Multi-Sensor Approach for the 3D Reconstruction and Web Visualization of a Complex Archaeological Site: The Case Study of “Tolmo De Minateda”
by Jose Alberto Torres-Martínez, Marcello Seddaiu, Pablo Rodríguez-Gonzálvez, David Hernández-López and Diego González-Aguilera
Remote Sens. 2016, 8(7), 550; https://doi.org/10.3390/rs8070550 - 29 Jun 2016
Cited by 27 | Viewed by 7518
Abstract
The complexity of archaeological sites hinders the creation of an integral model using the current Geomatic techniques (i.e., aerial, close-range photogrammetry and terrestrial laser scanning) individually. A multi-sensor approach is therefore proposed as the optimal solution to provide a 3D reconstruction and visualization of [...] Read more.
The complexity of archaeological sites hinders the creation of an integral model using the current Geomatic techniques (i.e., aerial, close-range photogrammetry and terrestrial laser scanning) individually. A multi-sensor approach is therefore proposed as the optimal solution to provide a 3D reconstruction and visualization of these complex sites. Sensor registration represents a critical milestone when automation is required and when aerial and terrestrial datasets must be integrated. To this end, several problems must be solved: coordinate system definition, geo-referencing, co-registration of point clouds, geometric and radiometric homogeneity, etc. The proposed multi-data source and multi-sensor approach is applied to the case study of the “Tolmo de Minateda” archaeological site. A total extension of 9 ha is reconstructed, with an adapted level of detail, using an ultralight aerial platform (paratrike), an unmanned aerial vehicle, a terrestrial laser scanner and terrestrial photogrammetry. Finally, a mobile device (e.g., tablet or smartphone) has been used to integrate, optimize and visualize all this information, providing added value to archaeologists and heritage managers who want an efficient tool for their work at the site, and even to non-expert users who just want to know more about the archaeological settlement. Full article
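Co-registration of the aerial and terrestrial point clouds is one of the milestones listed above. As a hedged illustration (not the authors' workflow), the snippet below refines an approximate georeferenced pose with point-to-point ICP in Open3D; the file names and the correspondence distance are hypothetical.

```python
import numpy as np
import open3d as o3d  # assumed available; any ICP implementation would do

# Hypothetical inputs: a TLS scan and the aerial photogrammetric block of the same area.
source = o3d.io.read_point_cloud("tls_scan.ply")
target = o3d.io.read_point_cloud("aerial_block.ply")

# Refine an approximate georeferenced pose (here the identity) with point-to-point ICP;
# 0.10 m is an illustrative maximum correspondence distance.
init = np.eye(4)
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.10, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
print(result.transformation)  # refined source-to-target transform
```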
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Paratrike employed (<b>a</b>); detail of the stabilization platform “MUSAS” (<b>b</b>) and UAV used in complex areas (<b>c</b>).</p>
Full article ">Figure 2
<p>Multi-sensor and multi-data source workflow for the 3D reconstruction of complex archaeological sites.</p>
Full article ">Figure 3
<p>Example of image footprints projected over terrain (<b>white</b> line) and the different workspaces (<b>red</b> line). (<b>a</b>) Paratrike flight planned over the whole area and (<b>b</b>) UAV flight planned over “El Reguerón” area.</p>
Full article ">Figure 4
<p>TLS stations distribution (<b>red</b> circles), including the area surveyed with TLS during the different campaigns performed (first campaign in <b>orange</b> colour and second campaign in <b>blue</b> colour).</p>
Full article ">Figure 5
<p>An example of data fusion using Least Squares Matching (LSM) between the laser (<b>Left</b>) and photogrammetric (<b>Right</b>) point clouds.</p>
Full article ">Figure 6
<p>Aerial view of the study area “Tolmo de Minateda” (from [<a href="#B50-remotesensing-08-00550" class="html-bibr">50</a>]).</p>
Full article ">Figure 7
<p>Location of the defensive area “El Reguerón”.</p>
Full article ">Figure 8
<p>Multi-resolution 3D model of the archaeological settlement (<b>a</b>). Detailed 3D model of “El Reguerón” (<b>b</b>).</p>
Full article ">Figure 9
<p>Multi-resolution model of “El Reguerón” area (<b>a</b>) and detail for the wall (<b>b</b>).</p>
Full article ">Figure 10
<p>True orthoimages of the whole settlement (<b>a</b>) and defensive area “El Reguerón” (<b>b</b>).</p>
Full article ">Figure 11
<p>Archaeological sketch (<b>a</b>) overlapped with the 3D model (<b>b</b>).</p>
Full article ">Figure 12
<p>Analysis of discrepancies between the simplified-optimised 3D model and the original point cloud (<b>Left</b>) and a detailed comparison over the walled area (<b>Right</b>).</p>
Full article ">Figure 13
<p>“El Regueron” 3D model and its additional information loaded in a smartphone (Base model from: © Analytical Graphics, Inc., © CGIAR-CSI, Produced using Copernicus data and information funded by the European Union—EU-DEM layers, © Commonwealth of Australia (Geoscience Australia) 2012) [<a href="#B19-remotesensing-08-00550" class="html-bibr">19</a>].</p>
Full article ">
5260 KiB  
Technical Note
Large Scale Automatic Analysis and Classification of Roof Surfaces for the Installation of Solar Panels Using a Multi-Sensor Aerial Platform
by Luis López-Fernández, Susana Lagüela, Inmaculada Picón and Diego González-Aguilera
Remote Sens. 2015, 7(9), 11226-11248; https://doi.org/10.3390/rs70911226 - 1 Sep 2015
Cited by 15 | Viewed by 8550
Abstract
A low-cost multi-sensor aerial platform (aerial trike), equipped with visible and thermographic sensors, is used for the acquisition of all the data needed for the automatic analysis and classification of roof surfaces regarding their suitability to harbor solar panels. The geometry of a [...] Read more.
A low-cost multi-sensor aerial platform (aerial trike), equipped with visible and thermographic sensors, is used for the acquisition of all the data needed for the automatic analysis and classification of roof surfaces regarding their suitability to harbor solar panels. The geometry of a georeferenced 3D point cloud generated from visible images using photogrammetric and computer vision algorithms, together with the temperatures measured on thermographic images, is used to evaluate the areas, tilts, orientations and the existence of obstacles, in order to locate the optimal zones inside each roof surface for the installation of solar panels. This information is complemented with an estimation of the solar irradiation received by each surface. In this way, large areas may be efficiently analyzed, obtaining as a final result the optimal locations for the placement of solar panels, as well as the information necessary (location, orientation, tilt, area and solar irradiation) to estimate the productivity of a solar panel from its technical characteristics. Full article
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Aerial trike and MUSAS platform used in this study; bottom-left corner: detail of the gyro-stabilized platform for the installation of sensors.</p>
Full article ">Figure 2
<p>Comparison between the products generated by the different used imaging sensors. (<b>Left</b>) RGB image. (<b>Right</b>) Thermographic image with the thermal values represented using a color map.</p>
Full article ">Figure 3
<p>(<b>Left</b>) Quadrant determination from the components of the vector normal to the plane. (<b>Right</b>) Absolute orientation of the roof, denoted as “<span class="html-italic">Or</span>”, from the angle between the projection of the vector on the horizontal plane and the cartesian <span class="html-italic">Y</span>-axis.</p>
Full article ">Figure 4
<p>Tilt angle from the values of the normal vector of the plane.</p>
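Following the geometric reasoning of Figures 3 and 4, tilt and orientation can be derived directly from the unit normal of each fitted roof plane. The sketch below is a hypothetical illustration of that computation, not the authors' implementation; the plane normal is invented.

```python
import numpy as np

def tilt_and_orientation(normal):
    """Tilt (degrees from horizontal) and orientation (degrees from the Y axis,
    measured on the horizontal plane) of a roof plane from its normal vector."""
    n = np.asarray(normal, dtype=float)
    nx, ny, nz = n / np.linalg.norm(n)
    tilt = np.degrees(np.arccos(abs(nz)))                  # 0 deg = flat roof
    orientation = np.degrees(np.arctan2(nx, ny)) % 360.0   # angle of the projected normal
    return tilt, orientation

# Hypothetical normal from a RANSAC plane fit to one roof surface.
tilt, orientation = tilt_and_orientation([0.25, -0.35, 0.90])
print(f"tilt: {tilt:.1f} deg, orientation: {orientation:.1f} deg")
```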
Full article ">Figure 5
<p>Procedure to locate optimal zones inside optimal surfaces. (<b>Left</b>) 3D surface selected (red rectangle) over the roof. (<b>Right</b>) From top to bottom: (<b>a</b>) surface under evaluation, without obstacles, projected to the horizontal plane; (<b>b</b>) concave hull that defines the perimeter of the surface; (<b>c</b>) largest empty rectangles inside the concave hull and (<b>d</b>) evaluated surface inliers of the largest empty rectangles re-projected to 3D.</p>
Full article ">Figure 6
<p>Flight planning: (<b>Top</b>) Areas captured by each image shot for the thermographic sensor. (<b>Middle</b>) Flight planning for the navigation of the aerial trike. (<b>Bottom</b>) GPS track after the aerial trike flight with the position of each image shot.</p>
Full article ">Figure 7
<p>Hybrid 3D point cloud generated from images captured with the RGB camera integrating thermographic texture mapped over the roofs. Areas shown in detail in consecutive figures are remarked in red.</p>
Full article ">Figure 8
<p>Check points homogeneously distributed over the whole study area.</p>
Full article ">Figure 9
<p>(<b>a</b>) Result of the extraction of each roof using the Euclidean cluster extraction algorithm. (<b>b</b>) Results of the extraction of each planar surface using the RANSAC algorithm. (<b>c</b>) Details of the previous point cloud segmented and classified by its suitability to install solar panels. (<b>Green</b>) Surfaces suitable to install solar panels without supports. (<b>Yellow</b>) Surfaces suitable to install solar panels using a support to modify the solar incidence angle. (<b>Red</b>) Surfaces not suitable for the installation of solar panels.</p>
Full article ">Figure 10
<p>Detail of the point cloud segmented and classified after the thermographic analysis. (Black) Points removed by the statistical thermographic analysis. (Grey) Perimeter defined by a distance parameter around the obstacles detected to assist the optimal location of solar panels.</p>
Full article ">Figure 11
<p>Georeferenced point cloud of the detail zones after the process. Optimal surfaces (rectangles) for the installation of solar panels are highlighted in blue. Roof IDs are used in <a href="#remotesensing-07-11226-t004" class="html-table">Table 4</a>.</p>
Full article ">Figure 12
<p>Tilt-orientation-factors (TOF) representation for Ávila (Spain) estimated by Solar 3D city.</p>
Full article ">Figure 13
<p>(<b>a</b>) Diagram of the RANSAC interpolation between two roof surfaces with an intersection angle close to 180°. (<b>b</b>) Top view of the evaluated roof. (<b>c</b>) Top view of the RANSAC algorithm results.</p>
Full article ">