
Search Results (96)

Search Parameters:
Keywords = bathymetric LiDAR

23 pages, 7313 KiB  
Article
Shallow Water Bathymetry Inversion Based on Machine Learning Using ICESat-2 and Sentinel-2 Data
by Mengying Ye, Changbao Yang, Xuqing Zhang, Sixu Li, Xiaoran Peng, Yuyang Li and Tianyi Chen
Remote Sens. 2024, 16(23), 4603; https://doi.org/10.3390/rs16234603 - 7 Dec 2024
Viewed by 850
Abstract
Shallow water bathymetry is essential for maritime navigation, environmental monitoring, and coastal management. While traditional methods such as sonar and airborne LiDAR provide high accuracy, their high cost and time-consuming nature limit their application in remote and sensitive areas. Satellite remote sensing offers a cost-effective and rapid alternative for large-scale bathymetric inversion, but it still relies on significant in situ data to establish a mapping relationship between spectral data and water depth. The ICESat-2 satellite, with its photon-counting LiDAR, presents a promising solution for acquiring bathymetric data in shallow coastal regions. This study proposes a rapid bathymetric inversion method based on ICESat-2 and Sentinel-2 data, integrating spectral information, the Forel-Ule Index (FUI) for water color, and spatial location data (normalized X and Y coordinates and polar coordinates). An automated script for extracting bathymetric photons in shallow water regions is provided, aiming to facilitate the use of ICESat-2 data by researchers. Multiple machine learning models were applied to invert bathymetry in the Dongsha Islands, and their performance was compared. The results show that the XG-CID and RF-CID models achieved the highest inversion accuracies, 93% and 94%, respectively, with the XG-CID model performing best in the range from −10 m to 0 m and the RF-CID model excelling in the range from −15 m to −10 m.
(This article belongs to the Special Issue Artificial Intelligence for Ocean Remote Sensing)
Figures:
Figure 1. Map of the study area. (a) Location of the study area. (b) Sentinel-2 image map of the Dongsha Islands. (c) Sentinel-2 image map of Lingshui-Sanya Bay; the red dots are the actual water depth measurement points.
Figure 2. Technical flowchart of the study. The blue dashed box illustrates the key steps in the ICESat-2 bathymetric photon extraction process.
Figure 3. Noise and land photons filtered using the YAPC algorithm; the right panel shows a zoomed-in view of the rectangular area. (a) Photon signal density confidence map based on the YAPC algorithm, with red areas indicating high-confidence regions. (b) Water signal estimation map, where the red dots represent valid photon signals from the water surface and below.
Figure 4. Reference lines for filtering noise and land photons. (a) The Otsu threshold method automatically determines the minimum threshold for valid photon signals; the red vertical line marks this threshold. (b) The estimated water surface height obtained from histogram statistics of valid photon signals along the track; the vertical black line indicates the estimated water surface height.
Figure 5. Water depth map after refraction correction. Gray points are uncorrected photon data; black points are refraction-corrected photon data. Blue points denote estimated water surface photons, and the red line represents the estimated seafloor.
Figure 6. ICESat-2 bathymetric photon extraction results. (a) Lingshui-Sanya Bay. (b) Dongsha Islands.
Figure 7. Distribution of depth differences between the measured points and the nearest ICESat-2 bathymetry points. The X-axis represents the sequence number of the measured points.
Figure 8. Inversion results of the four models for the Dongsha Islands, where "-Bands" denotes bathymetric images inverted using spectral information only and "-CID" denotes images inverted using the comprehensive information. (a) Random Forest-Bands. (b) Gradient Boosting-Bands. (c) Polynomial Regression-Bands. (d) XGBoost-Bands. (e) Random Forest-CID. (f) Gradient Boosting-CID. (g) Polynomial Regression-CID. (h) XGBoost-CID. (i) Stumpf-BG. (j) Stumpf-BR. (k) Forel-Ule Index.
Figure 9. Scatter plots, residual plots, and deviation distributions of predicted bathymetry versus ICESat-2 bathymetry values for the models in Figure 8 (a-j).
Figure 10. Bar charts of performance evaluation metrics for each model across different depth ranges.
Figure 11. SHAP analysis of feature contributions across depth intervals. The leftmost plot in each group shows the overall analysis across all depth intervals; the remaining plots correspond to individual depth intervals. (a) Random Forest-CID. (b) Gradient Boosting-CID. (c) XGBoost-CID.
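The inversion pipeline described in the abstract (regressing ICESat-2 photon depths against Sentinel-2 spectral bands, the Forel-Ule Index, and location features) can be prototyped with standard tooling. The following is a minimal sketch rather than the authors' code: the feature layout, the synthetic training data, and the hyperparameters are all assumptions, with scikit-learn's Random Forest standing in for the paper's RF-CID configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical feature table: one row per ICESat-2 bathymetric photon with
# co-located Sentinel-2 bands, Forel-Ule Index, and normalized coordinates.
rng = np.random.default_rng(0)
X = rng.random((5000, 8))        # [B2, B3, B4, B8, FUI, x_norm, y_norm, r_polar]
depth = -15.0 * X[:, 1] + rng.normal(0.0, 0.5, 5000)   # stand-in depths (m)

X_tr, X_te, y_tr, y_te = train_test_split(X, depth, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0)  # RF-CID analogue
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {rmse:.2f} m")
```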
13 pages, 1882 KiB  
Article
Coastline Bathymetry Retrieval Based on the Combination of LiDAR and Remote Sensing Camera
by Yicheng Liu, Tong Wang, Qiubao Hu, Tuanchong Huang, Anmin Zhang and Mingwei Di
Water 2024, 16(21), 3135; https://doi.org/10.3390/w16213135 - 1 Nov 2024
Viewed by 821
Abstract
This paper presents a Compact Integrated Water–Land Survey System (CIWS), which combines a remote sensing camera and a LiDAR module, and proposes an innovative underwater topography retrieval technique based on this system. The technique uses high-precision water depth points obtained from LiDAR measurements as control points and integrates them with the grayscale values of aerial photogrammetry images to construct a bathymetry retrieval model capable of large-scale bathymetric retrieval in shallow waters. Calibration of the UAV-mounted LiDAR system was conducted using laboratory and Dongjiang Bay marine calibration fields, with the results showing a laser depth measurement accuracy of up to 10 cm. Experimental tests near Miaowan Island demonstrated the generation of high-precision 3D seabed topographic maps for the South China Sea area using LiDAR depth data and remote sensing images. The study validates the feasibility and accuracy of this integrated scanning method for producing detailed 3D seabed topography models.
(This article belongs to the Special Issue Application of Remote Sensing for Coastal Monitoring)
Figures:
Figure 1. Appearance of the CIWS: (a) 3D model; (b) actual photograph.
Figure 2. Simulation of different water depths based on 3D-printed models.
Figure 3. Schematic diagram of the distance measurement error calibration experiment.
Figure 4. Processing flow of airborne LiDAR bathymetry data.
Figure 5. Calibration field at Dongjiang Bay.
Figure 6. Multibeam bathymetric survey experiment with the Norbit iWBMS and the Dolphin-1 USV.
Figure 7. Seafloor topography of Dongjiang Bay, obtained from multibeam data.
Figure 8. Calibration experiment of the CIWS: (a) hexacopter UAV equipped with the CIWS; (b) flight trajectory.
Figure 9. Average error at different depths.
Figure 10. High-precision 3D seabed topography retrieval of the Ganchcondo water area based on remote sensing images. (a) High-resolution multispectral satellite image; (b) 3D model of the underwater terrain.
Figure 11. Comparison between simulated model water depths and measured water depths.
Figure 12. Integrated measurement experiment at Miaowan Island.
Figure 13. Aerial imagery data acquired by the CIWS.
Figure 14. Bathymetric retrieval results for the coastal waters near Miaowan Island.
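The abstract does not spell out the functional form of the retrieval model, so the sketch below uses a common log-linear attenuation model purely as an illustrative stand-in: fit depth against the logarithm of image grayscale at the LiDAR control points, then apply the fit to every pixel. All values are hypothetical.

```python
import numpy as np

# Hypothetical control points: LiDAR depths (m, positive down) and the
# co-located grayscale values from the aerial orthophoto.
depth_ctrl = np.array([1.2, 2.5, 4.0, 5.8, 7.5, 9.1])
gray_ctrl = np.array([182.0, 150.0, 121.0, 97.0, 80.0, 66.0])

# Fit depth = a * ln(gray) + b by least squares (log-linear attenuation model).
a, b = np.polyfit(np.log(gray_ctrl), depth_ctrl, deg=1)

def retrieve_depth(gray):
    """Estimate water depth for image pixels from their grayscale values."""
    return a * np.log(gray) + b

print(retrieve_depth(np.array([160.0, 110.0, 70.0])))  # depths for new pixels
```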
13 pages, 13392 KiB  
Review
Evolution of Single Photon Lidar: From Satellite Laser Ranging to Airborne Experiments to ICESat-2
by John J. Degnan
Photonics 2024, 11(10), 924; https://doi.org/10.3390/photonics11100924 - 30 Sep 2024
Viewed by 1455
Abstract
In September 2018, NASA launched the ICESat-2 satellite into a 500 km high Earth orbit. It carried a truly unique lidar system, i.e., the Advanced Topographic Laser Altimeter System or ATLAS. The ATLAS lidar is capable of detecting single photons reflected from a wide variety of terrain (land, ice, tree leaves, and underlying terrain) and even performing bathymetric measurements due to its green wavelength. The system uses a single 5-watt, Q-switched laser producing a 10 kHz train of sub-nanosecond pulses, each containing 500 microjoules of energy. The beam is then split into three "strong" and three "weak" beamlets, with the "strong" beamlets containing four times the power of the "weak" beamlets in order to satisfy a wide range of Earth science goals. Thus, ATLAS is capable of making up to 60,000 surface measurements per second compared to the 40 measurements per second made by its predecessor multiphoton instrument, the Geoscience Laser Altimeter System (GLAS) on ICESat-1, which was terminated after several years of operation in 2009. Low deadtime timing electronics are combined with highly effective noise filtering algorithms to extract the spatially correlated surface photons from the solar and/or electronic background noise. The present paper describes how the ATLAS system evolved from a series of unique and seemingly unconnected personal experiences of the author in the fields of satellite laser ranging, optical antennas and space communications, Q-switched laser theory, and airborne single photon lidars.
Figures:
Figure 1. The prototype NASA SLR2000 system projected a 2 kHz train of low-energy, 532 nm, sub-nanosecond pulses via an unobscured 30 cm primary lens in order to concentrate more photons on the satellite while simultaneously eliminating potential eye hazards.
Figure 2. Summary of airborne single photon lidars developed at Sigma Space Corporation.
Figure 3. A rotating single wedge traces a circle on the terrain. Over longer ranges, the receiver array FOV will, due to the finite speed of light, become displaced along the circumference of the circle from the array of laser spots on the surface. An annular compensator wedge is therefore often used: the transmitted laser beamlets pass unaffected through a small central hole and are deflected by the optical wedge (a), while the receiver array FOV is angularly displaced opposite to the direction of rotation so that the detector array views the illuminated area when the photons return from the surface (b).
Figure 4. Editing noise. (a) Unedited point cloud of Greenland terrain (surface reflectivity > 0.9) shows a fair amount of solar noise above (red haze) and below (blue haze) the surface. (b) The same point cloud after applying Sigma-developed noise-editing filters.
Figure 5. (Top) Lidar image of the Pacific coastline in Port Lobos, California. (Bottom) Detailed look along the blue line in the top figure showing a hilltop monastery, the heights of various trees, the surface of the Pacific Ocean, and the ocean bottom to a depth of 13 m (42.7 ft).
Figure 6. Single photon lidar image of downtown Houston, Texas, taken by the Sigma HRQLS system (pronounced "Hercules"). Colors were arbitrarily assigned based on height above the ground.
Figure 7. Side view of an airborne 3D image of a fire tower surrounded by a chain-link fence and trees of varying height. Colors are arbitrarily assigned based on height above the surface.
Figure 8. Sample surface data from a single channel (#6) of the 16-channel airborne MABEL push-broom lidar taken over the Greenland ice sheet from an altitude of 20 km [18]. The data demonstrated the lidar's ability to observe a wide range of surface slopes in the presence of high-intensity solar (or detector) noise, owing to the high spatial correlation of the surface counts and a low-deadtime receiver.
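The ATLAS figures quoted above are internally consistent and quick to verify: 500 microjoule pulses at 10 kHz give the stated 5 W average power, and six beamlets each ranging at 10 kHz give the 60,000 measurements per second. A short check (the variable names are ours):

```python
pulse_energy_j = 500e-6   # 500 microjoules per pulse
rep_rate_hz = 10e3        # 10 kHz pulse train
beamlets = 6              # three "strong" + three "weak"

avg_power_w = pulse_energy_j * rep_rate_hz
measurements_per_s = beamlets * rep_rate_hz

print(avg_power_w)          # 5.0 W, matching the quoted laser power
print(measurements_per_s)   # 60000, versus 40 per second for GLAS
```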
23 pages, 11057 KiB  
Article
Denoising of Photon-Counting LiDAR Bathymetry Based on Adaptive Variable OPTICS Model and Its Accuracy Assessment
by Peize Li, Yangrui Xu, Yanpeng Zhao, Kun Liang and Yuanjie Si
Remote Sens. 2024, 16(18), 3438; https://doi.org/10.3390/rs16183438 - 16 Sep 2024
Viewed by 747
Abstract
Spaceborne photon-counting LiDAR holds significant potential for shallow-water bathymetry. However, the received photon data often contain substantial noise, complicating the extraction of elevation information. The denoising algorithm "ordering points to identify the clustering structure" (OPTICS) has drawn attention for its strong performance under high background noise, but its fixed input variables can lead to inaccurate photon distribution parameters near the water bottom, resulting in inadequate denoising in these areas and degrading bathymetric accuracy. To address this issue, an Adaptive Variable OPTICS (AV-OPTICS) model is proposed in this paper. Unlike the traditional OPTICS model with fixed input variables, the proposed model dynamically adjusts input variables based on the point cloud distribution. This adjustment ensures accurate measurement of photon distribution parameters near the water bottom, thereby enhancing denoising in these areas and improving bathymetric accuracy. The findings indicate that, compared to traditional OPTICS methods, AV-OPTICS achieves higher F1-values and lower cohesions, demonstrating better denoising performance near the water bottom. Furthermore, the method achieves an average MAE of 0.28 m and RMSE of 0.31 m, indicating better bathymetric accuracy than traditional OPTICS methods. This study provides a promising solution for shallow-water bathymetry based on photon-counting LiDAR data.
(This article belongs to the Special Issue Satellite Remote Sensing for Ocean and Coastal Environment Monitoring)
Figures:
Figure 1. Study area and the detection tracks of ATLAS, represented by six different colors of dashed lines.
Figure 2. Water depth results of the study area for the ALB reference data.
Figure 3. Elevation distribution histogram of the ATL03 photon data.
Figure 4. Contours of the water bottom terrain under different scenarios. (a) Relatively flat water bottom terrain; (b) complex water bottom terrain.
Figure 5. The distance between points o and w under the definition of the OPTICS algorithm.
Figure 6. Spatial geometric relationships of the refraction correction under different slope angles φ. The green and red vectors correspond to the original and corrected coordinates of water bottom photons, respectively [38].
Figure 7. Denoising effects of the proposed method and traditional OPTICS in different scenes. (a) 20190119gt3l, raw data; (b) 20190119gt3l, our method; (c) 20190119gt3l, traditional OPTICS; (d) 20181024gt3r, raw data; (e) 20181024gt3r, our method; (f) 20181024gt3r, traditional OPTICS; (g) 20200717gt3l, raw data; (h) 20200717gt3l, our method; (i) 20200717gt3l, traditional OPTICS.
Figure 8. Comparison of denoising details of the two methods in different scenarios. (a) 20181024gt3r, our method; (b) 20181024gt3r, traditional OPTICS; (c) 20190420gt2l, our method; (d) 20190420gt2l, traditional OPTICS.
Figure 9. Coordinate correction and fitting profiles of the signal photons. (a) 20190119gt3l, coordinate correction; (b) 20190119gt3l, fitting profiles; (c) 20181024gt3r, coordinate correction; (d) 20181024gt3r, fitting profiles.
Figure 10. Bathymetric accuracy validation and comparison of the proposed method and traditional OPTICS. (a,b) correspond to 20190119gt3l; (c,d) correspond to 20190420gt2l; (a,c) our method; (b,d) traditional OPTICS. The red and black points represent the bathymetric results of the corresponding method and the in situ data, respectively.
Figure 11. Idealized model of the elliptic filter in the vertical direction. The water bottom contour within the black block is regarded as a gray rectangle; the yellow, red, blue, and green lines are idealized elliptical filters with different semi-minor axis lengths.
Figure 12. Idealized model of the elliptic filter in the horizontal direction. The water bottom contour within the black block is regarded as a gray rectangle; the yellow and red lines are idealized elliptical filters with different semi-major axis lengths.
Figure 13. Deviation percentage between the ICESat-2 results and the ALB in situ data, with an interval of 0.5 m per histogram column. (a,b) correspond to 20190119gt3l; (c,d) correspond to 20201016gt2l; (a,c) our method; (b,d) traditional OPTICS.
Figure 14. Bathymetric accuracy comparison of the proposed method and variants without coordinate correction. (a-c) correspond to 20190119gt3l and (d-f) to 20181024gt3r; (a,d) our method; (b,e) without refraction correction; (c,f) without tide correction. The red and black lines represent the bathymetric results of the corresponding method and the in situ data, respectively.
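scikit-learn ships an OPTICS implementation that can be run directly on photon coordinates, which makes the fixed-parameter baseline the paper improves on easy to reproduce; the adaptive variable selection of AV-OPTICS is the paper's contribution and is not shown here. A minimal sketch with hypothetical parameters and synthetic photons:

```python
import numpy as np
from sklearn.cluster import OPTICS

# Hypothetical ATL03-style photons: along-track distance (m) and elevation (m).
rng = np.random.default_rng(1)
signal = np.column_stack([np.linspace(0, 200, 400),
                          -8 + 0.3 * rng.standard_normal(400)])
noise = np.column_stack([rng.uniform(0, 200, 300), rng.uniform(-30, 5, 300)])
photons = np.vstack([signal, noise])

# Traditional OPTICS with fixed inputs; AV-OPTICS would adapt these per segment.
clusterer = OPTICS(min_samples=15, max_eps=3.0).fit(photons)
is_signal = clusterer.labels_ != -1   # -1 marks photons rejected as noise

print(f"kept {is_signal.sum()} of {len(photons)} photons as signal")
```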
17 pages, 5927 KiB  
Article
Accuracy of Bathymetric Depth Change Maps Using Multi-Temporal Images and Machine Learning
by Kim Lowell and Joan Hermann
J. Mar. Sci. Eng. 2024, 12(8), 1401; https://doi.org/10.3390/jmse12081401 - 15 Aug 2024
Cited by 2 | Viewed by 795
Abstract
Most work to date on satellite-derived bathymetry (SDB) depth change estimates water depth at individual times t1 and t2 using two separate models and then differences the model estimates. An alternative approach is explored in this study: a multi-temporal Sentinel-2 image is created by "stacking" the bands of the t1 and t2 images, geographically coincident reference data for t1 and t2 allow "true" depth change to be calculated for the pixels of the multi-temporal image, and this information is used to fit a single model that estimates depth change directly rather than indirectly as in the model-differencing approach. The multi-temporal image approach reduced the depth change RMSE by about 30%. The machine learning modelling method (categorical boosting) outperformed linear regression. Overfitting was limited even for the CatBoost models with the maximum number of variables examined. The visible Sentinel-2 spectral bands contributed most to the model predictions. Though the multi-temporal stacked image approach produced clearly superior depth change estimates compared to the conventional approach, it is limited to those areas for which geographically coincident multi-temporal reference/"true" depth data exist.
(This article belongs to the Special Issue Remote Sensing and GIS Applications for Coastal Morphodynamic Systems)
Figures:
Figure 1. Schematic of the conventional approach to using SDB to estimate depth change.
Figure 2. Schematic of the proposed approach to estimating depth change using SDB.
Figure 3. Data preparation and modelling schema.
Figure 4. Study area location (latitude 30.32°, longitude -87.15°).
Figure 5. Example of the sparsest sample employed ("1plus1"; see text) overlain on the "true"/reference bathymetric depth. The number of pixels sampled across all three ICESat-2 tracks is indicated in the legend; the "angles" are the two bearings of actual ICESat-2 overpasses for this area. The study area is located in UTM Zone 17.
Figure 6. Average (over all sample types) root mean squared error of residuals for models fitted to the train and test data sets. Band combinations are 2: visible (bands 2 (blue), 3 (green), 4 (red)); 11: visible + bands 6 and 8 (visible and near-infrared, VNIR); 12: visible + PsBtG + PsBtR (Equations (1) and (2)); 17: visible + VNIR + PsBtG + PsBtR.
Figure 7. Example of the performance of one multi-temporal "stacked" image depth change model for the (a) training and (b) testing data sets for one model type (CatBoost), sample type ("3plus3"), and band combination (17), and (c) the fitted model applied to the entire data set.
Figure 8. Example of differences between "true"/LiDAR depth change and CatBoost model-estimated depth change for the entire study area (UTM Zone 17).
Figure 9. Model performance metrics ((a,c): correlation/R²; (b,d): RMSE in m) for estimating depth change using individual SDB models for 2019 and 2021 and differencing the outputs. Band combinations as in Figure 6.
Figure 10. Model performance metrics ((a,c): correlation/R²; (b,d): RMSE in m) for estimating depth change using a single depth change model fitted to a multi-temporal "stacked" image. Band combinations as in Figure 6.
Figure 11. Average band importance over all samples and band combinations. Bands are B2: blue; B3: green; B4: red; B6 and B8: VNIR; PsBtG: pseudo-bathymetry green ratio; PsBtR: pseudo-bathymetry red ratio.
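The single-model idea can be sketched directly: stack the t1 and t2 band values of each pixel into one feature vector and regress the reference depth change on it. The sketch below assumes the catboost package; the array shapes, the synthetic target, and the hyperparameters are illustrative only.

```python
import numpy as np
from catboost import CatBoostRegressor

rng = np.random.default_rng(2)
n_pixels = 2000
bands_t1 = rng.random((n_pixels, 3))        # Sentinel-2 visible bands at t1
bands_t2 = rng.random((n_pixels, 3))        # same bands at t2
X = np.hstack([bands_t1, bands_t2])         # the "stacked" multi-temporal image
dz_true = 3.0 * (bands_t2[:, 1] - bands_t1[:, 1])  # stand-in reference change (m)

model = CatBoostRegressor(iterations=300, depth=6, verbose=False)
model.fit(X, dz_true)
dz_pred = model.predict(X)

rmse = float(np.sqrt(np.mean((dz_pred - dz_true) ** 2)))
print(f"training RMSE: {rmse:.3f} m")   # one model, depth change estimated directly
```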
23 pages, 22355 KiB  
Article
Development of an Adaptive Fuzzy Integral-Derivative Line-of-Sight Method for Bathymetric LiDAR Onboard Unmanned Surface Vessel
by Guoqing Zhou, Jinhuang Wu, Ke Gao, Naihui Song, Guoshuai Jia, Xiang Zhou, Jiasheng Xu and Xia Wang
Remote Sens. 2024, 16(14), 2657; https://doi.org/10.3390/rs16142657 - 20 Jul 2024
Viewed by 896
Abstract
Previous control methods developed by our research team cannot satisfy the high accuracy requirements of unmanned surface vessel (USV) path-tracking during bathymetric mapping because of excessive overshoot and slow convergence speed. For this reason, this study developed an adaptive fuzzy integral-derivative line-of-sight (AFIDLOS) method for USV path-tracking control. Integral and derivative terms were added to counteract the effect of the sideslip angle so that the USV can be quickly guided to converge to the planned path for bathymetric mapping. To obtain high accuracy of the look-ahead distance, a fuzzy control method was proposed. The proposed method was verified using simulations and outdoor experiments. The results demonstrate that, compared with the traditional guidance law, the AFIDLOS method can reduce the overshoot by 79.85% and shorten the settling time by 55.32% in simulation experiments, and in outdoor mapping can reduce the average cross-track error by 10.91% while ensuring a 30% overlap of neighboring bathymetric LiDAR strips.
(This article belongs to the Special Issue Optical Remote Sensing Payloads, from Design to Flight Test)
Figures:
Figure 1. Framework of the path-tracking system, where ψ_d is the desired heading angle; ψ is the actual heading angle; Δψ is the difference between ψ_d and ψ; y_e is the cross-track error; ẏ_e is the change rate of the cross-track error; γ is the convergence rate; Δn is the control command; Δ is the look-ahead distance; and X and Y are the latitude and longitude position of the USV, respectively.
Figure 2. Principle of the proposed AFIDLOS.
Figure 3. Membership functions of (a) the cross-track error y_e, (b) the change rate of the cross-track error ẏ_e, and (c) the convergence rate γ.
Figure 4. Fuzzy input-output 3D surface view.
Figure 5. Hardware architecture of the USV control system.
Figure 6. STM32 microcontroller.
Figure 7. Programming software framework in the STM32 for the proposed AFIDLOS method.
Figure 8. Simulation path, where A, B, C, D, and E are the points of the planned path and S is the location of the USV.
Figure 9. Schematic diagram of the simulation model.
Figure 10. Comparison of heading control with the AFIDLOS method and the LOS guidance law.
Figure 11. Experimental verification in an artificial lake.
Figure 12. Experimental results for the triangular path in an artificial lake (P11 → P12 → P13 → P11). (a) Comparison of path; (b) comparison of cross-track error.
Figure 13. Experimental results for the quadrilateral path in an artificial lake (P14 → P15 → P16 → P17 → P14). (a) Comparison of path; (b) comparison of cross-track error.
Figure 14. Experiment in a natural lake.
Figure 15. Experimental results for a triangular path in a natural lake (P21 → P22 → P23 → P21). (a) Comparison of path; (b) comparison of cross-track error.
Figure 16. Experimental results for the quadrilateral path in a natural lake (P24 → P25 → P26 → P27 → P24). (a) Comparison of path; (b) comparison of cross-track error.
Figure 17. Verification in the Beibu Gulf.
Figure 18. Experimental results for the quadrilateral path in the Beibu Gulf (P31 → P32 → P33 → P34 → P31). (a) Comparison of path; (b) comparison of cross-track error.
Figure 19. The planned path in Pinqing Lake.
Figure 20. Experimental results in Pinqing Lake. (a) Trajectory map using the GQ-Cormorant 19; (b) point cloud data of the water surface; (c) point cloud data of the water bottom.
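The core of the guidance law, classical LOS augmented with integral and derivative terms on the cross-track error, can be expressed compactly. The sketch below follows the standard integral-LOS structure; the gains, the fixed look-ahead distance (which the paper adapts with fuzzy control), and the time step are hypothetical stand-ins for the tuned values.

```python
import math

class IntegralDerivativeLOS:
    """LOS guidance with integral and derivative cross-track-error terms."""

    def __init__(self, k_i=0.05, k_d=0.5, dt=0.1):
        self.k_i, self.k_d, self.dt = k_i, k_d, dt
        self.integral = 0.0
        self.prev_ye = 0.0

    def desired_heading(self, path_azimuth, y_e, look_ahead):
        """path_azimuth: bearing of the planned leg (rad); y_e: cross-track
        error (m); look_ahead: Δ, fixed here but fuzzy-adapted in the paper."""
        self.integral += y_e * self.dt
        ye_rate = (y_e - self.prev_ye) / self.dt
        self.prev_ye = y_e
        correction = y_e + self.k_i * self.integral + self.k_d * ye_rate
        return path_azimuth - math.atan2(correction, look_ahead)

los = IntegralDerivativeLOS()
psi_d = los.desired_heading(math.radians(90.0), y_e=5.0, look_ahead=10.0)
print(f"desired heading: {math.degrees(psi_d):.1f} deg")
```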
20 pages, 22937 KiB  
Article
A Combination of Remote Sensing Datasets for Coastal Marine Habitat Mapping Using Random Forest Algorithm in Pistolet Bay, Canada
by Sahel Mahdavi, Meisam Amani, Saeid Parsian, Candace MacDonald, Michael Teasdale, Justin So, Fan Zhang and Mardi Gullage
Remote Sens. 2024, 16(14), 2654; https://doi.org/10.3390/rs16142654 - 20 Jul 2024
Viewed by 1072
Abstract
Marine ecosystems serve as vital indicators of biodiversity, providing habitats for diverse flora and fauna. Canada's extensive coastal regions encompass a rich range of marine habitats, necessitating accurate mapping techniques utilizing advanced technologies, such as remote sensing (RS). This study focused on a study area in Pistolet Bay in Newfoundland and Labrador (NL), Canada, with an area of approximately 170 km2 and depths varying between 0 and −28 m. Considering the relatively large coverage and shallow water depths of the study area, airborne bathymetric Light Detection and Ranging (LiDAR) data, which use green laser pulses, were chosen to map the marine habitats in this region. Along with the LiDAR data, Remotely Operated Vehicle (ROV) footage, high-resolution multispectral drone imagery, true color Google Earth (GE) imagery, and shoreline survey data were also collected. These datasets were preprocessed and categorized into five classes: Eelgrass, Rockweed, Kelp, Other vegetation, and Non-Vegetation. A marine habitat map of the study area was generated from features extracted from the LiDAR data, such as intensity, depth, slope, and canopy height, using an object-based Random Forest (RF) algorithm. Despite multiple challenges, the resulting habitat map exhibited a commendable classification accuracy of 89%. This underscores the efficacy of the developed Artificial Intelligence (AI) model for future marine habitat mapping endeavors across the country.
Figures:
Figure 1. (a) Pistolet Bay's position (purple) at the northern tip of Newfoundland Island, Canada; (b) the boundary of the study area, enclosed by the purple line.
Figure 2. Ground truth survey types and locations in Pistolet Bay.
Figure 3. Frequently observed marine habitat types in the ROV point surveys: (A) eelgrass, (B) sugar kelp, (C) rockweed Fucus sp., (D) rockweed Ascophyllum nodosum, (E) brown filamentous algae, and (F) crustose coralline algae.
Figure 4. (a-d) True color drone orthomosaics from four locations in Pistolet Bay (locations D1-D4 in Figure 2). The purple lines indicate the waterline.
Figure 5. Flowchart of the proposed remote sensing method for marine habitat mapping.
Figure 6. Screenshot of an ROV video from one of the transects (A1T1, indicated by the red line), showing an area that was considered Non-Vegetation despite the presence of sparse vegetation. The purple curve indicates the waterline in the study area.
Figure 7. (a) True color drone imagery showing exposed (orange) and submerged (green) rockweed; (b) the location of rockweed derived by filtering the drone imagery; (c) the final boundary of the rockweed (cyan line), delineated manually using the imagery.
Figure 8. Products derived from the airborne bathymetric LiDAR point cloud data: (a) water depth, (b) Digital Surface Model (DSM), (c) Canopy Height Model (CHM), (d) slope, and (e) intensity.
Figure 9. The final marine habitat map over the study area, with two zoomed areas and their corresponding classified maps.
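Per image object, the classification reduces to a supervised model over the LiDAR-derived layers named above (depth, slope, intensity, canopy height). A minimal sketch with hypothetical training objects and labels; the paper's object segmentation step is omitted:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

classes = ["Eelgrass", "Rockweed", "Kelp", "Other vegetation", "Non-Vegetation"]

# Hypothetical training objects: [depth (m), slope (deg), intensity, CHM (m)],
# labeled from ROV, drone, and shoreline ground-truth surveys.
rng = np.random.default_rng(3)
X_train = rng.random((500, 4)) * [28, 30, 255, 3]
y_train = rng.integers(0, len(classes), 500)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# Classify new objects and report which LiDAR feature mattered most.
X_new = rng.random((5, 4)) * [28, 30, 255, 3]
print([classes[i] for i in clf.predict(X_new)])
print(dict(zip(["depth", "slope", "intensity", "chm"], clf.feature_importances_)))
```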
20 pages, 3978 KiB  
Article
Application and Evaluation of the AI-Powered Segment Anything Model (SAM) in Seafloor Mapping: A Case Study from Puck Lagoon, Poland
by Łukasz Janowski and Radosław Wróblewski
Remote Sens. 2024, 16(14), 2638; https://doi.org/10.3390/rs16142638 - 18 Jul 2024
Cited by 1 | Viewed by 1377
Abstract
The digital representation of the seafloor, a challenge in UNESCO's Ocean Decade initiative, is essential for supporting sustainable development and protecting the marine environment, in line with the goals of the United Nations' 2030 program. Accurate seafloor representation can be achieved through remote sensing measurements from acoustic and laser sources, and integrating ground truth information enables comprehensive seafloor assessment. The current seafloor mapping paradigm benefits from the object-based image analysis (OBIA) approach, which manages high-resolution remote sensing measurements effectively. A critical OBIA step is segmentation, for which various algorithms are available. Recent advances in artificial intelligence have led to the development of AI-powered segmentation algorithms, such as the Segment Anything Model (SAM) by META AI. This paper presents the first evaluation of the SAM approach for seafloor mapping. The benchmark remote sensing dataset covers Puck Lagoon, Poland, and includes measurements from various sources, primarily multibeam echosounders, bathymetric lidar, airborne photogrammetry, and satellite imagery. The SAM algorithm's performance was evaluated on an affordable workstation equipped with an NVIDIA GPU, enabling use of the CUDA architecture. The growing popularity of and demand for AI-based services suggest their widespread application in future underwater remote sensing studies, regardless of the measurement technology used (acoustic, laser, or imagery). Applying SAM to Puck Lagoon seafloor mapping may benefit other seafloor mapping studies intending to employ AI technology.
(This article belongs to the Special Issue Advanced Remote Sensing Technology in Geodesy, Surveying and Mapping)
Figures:
Figure 1. Geographical representation of the study site and the benchmark remote sensing datasets: (a) location of the study site within central Europe, marked by the red area; (b) MBES bathymetry; (c) MBES backscatter; (d) bathymetric LiDAR intensity; (e) orthophoto of the study site; (f) SDB; (g) joint DEM generated by integrating the MBES and ALB bathymetries.
Figure 2. Detailed flow chart of the methods used in this study.
Figure 3. Side-by-side presentation of the spatial results of SAM, allowing a comparative analysis of different SAM parameters and manual discrimination. Image segments are outlined by a black border over the SDB bathymetry: (a) SAM, ViT-B model, 3 m pixel size; (b) SAM, ViT-B, 4 m; (c) SAM, ViT-B, 5 m; (d) SAM, ViT-L, 5 m; (e) SAM + MRS, ViT-B, 4 m; (f) SAM + MRS, ViT-B, 5 m; (g) SAM + MRS, ViT-L, 4 m; (h) SAM + MRS, ViT-L, 5 m; (i) manual image segmentation by expert interpretation.
Figure 4. (a) Comparative analysis of the results of three methods: SAM + MRS (black solid line), manual delineation (grey dashed line), and MRS with the RF algorithm (classification expressed in a color scale). (b) Result of the RF classification over the SAM + MRS outcome. Bedform types in both maps are identified and named according to the "symbol names" column in Table 2.
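Running SAM in automatic mask-generation mode over a raster reduced to an 8-bit RGB image takes only a few lines with META AI's segment-anything package. The sketch below is an assumption-laden outline: the checkpoint file must be downloaded separately, the raster here is synthetic, and the paper's combination of SAM with multiresolution segmentation (MRS) is not shown.

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Assumed input: an SDB/bathymetry raster rescaled to an 8-bit, 3-channel image.
raster = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)

# ViT-B variant, as evaluated in the paper; the checkpoint must be on disk.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
sam.to("cuda")  # CUDA-capable NVIDIA GPU, as used in the study

mask_generator = SamAutomaticMaskGenerator(sam)
masks = mask_generator.generate(raster)   # one dict per detected segment

print(len(masks), "segments found")
```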
30 pages, 20734 KiB  
Article
Automatic Classification of Submerged Macrophytes at Lake Constance Using Laser Bathymetry Point Clouds
by Nike Wagner, Gunnar Franke, Klaus Schmieder and Gottfried Mandlburger
Remote Sens. 2024, 16(13), 2257; https://doi.org/10.3390/rs16132257 - 21 Jun 2024
Viewed by 1152
Abstract
Submerged aquatic vegetation, also referred to as submerged macrophytes, provides important habitats and serves as a significant ecological indicator for assessing the condition of water bodies and for gaining insights into the impacts of climate change. In this study, we introduce a novel approach for the classification of submerged vegetation captured with bathymetric LiDAR (Light Detection And Ranging) as a basis for monitoring their state and change, and we validated the results against established monitoring techniques. Employing full-waveform airborne laser scanning, which is routinely used for topographic mapping and forestry applications on dry land, we extended its application to the detection of underwater vegetation in Lake Constance. The primary focus of this research lies in the automatic classification of bathymetric 3D LiDAR point clouds using a decision-based approach, distinguishing the three vegetation classes, (i) Low Vegetation, (ii) High Vegetation, and (iii) Vegetation Canopy, based on their height and other properties like local point density. The results reveal detailed 3D representations of submerged vegetation, enabling the identification of vegetation structures and the inference of vegetation types with reference to pre-existing knowledge. While the results within the training areas demonstrate high precision and alignment with the comparison data, the findings in independent test areas exhibit certain deficiencies that are likely addressable through corrective measures in the future.
Figures:
Graphical abstract.
Figure 1. Location of the study area at the country (a) and local (b) levels. Distribution of the areas of interest (AOIs) and test areas T1 and T2 at the Lake Constance Lower Lake (c). Coordinate Reference System: ETRS89/UTM zone 32N.
Figure 2. Airborne laser scanning (ALS) processing chain applied for the automatic classification of submerged macrophytes.
Figure 3. Coverage of an AOI polygon by different flight strips (PointIds).
Figure 4. Processing chain of vegetation candidate classification.
Figure 5. Measure of the local 3D point density (variable dist_all, the sum of the distances to the 20 nearest neighboring points and therefore inversely proportional to the point density) as a detection feature for Low Vegetation, demonstrated on a point cloud cross section.
Figure 6. Reflectance values as a detection feature for High Vegetation, demonstrated on a point cloud cross section.
Figure 7. NumberOfReturns values as a detection feature for Vegetation Canopy, demonstrated on a point cloud cross section.
Figure 8. Visualization of the DSM calculation principle for each vegetation class, based on a cross section. Candidate points for Low Vegetation (green), High Vegetation (light green), and Vegetation Canopy (orange).
Figure 9. Classification order for automatic point cloud classification using DSMs.
Figure 10. Results of the automatic classification of tile ETL4.
Figure 11. Comparison of classification results (candidate points only) (a) with orthophoto (b) and field survey-supported aerial photo interpretation (c) for ETL4.
Figure 12. Legend for the aerial photo-based classification compared to the LiDAR data-based classification classes.
Figure 13. Results of the automatic classification of tile ETN2: (a) top view and (b) selected cross section.
Figure 14. Results of the automatic point cloud classification of test area T2 (a) and a selected cross section (b) illustrating the structure of the class High Vegetation within the water column. Only candidate points are shown.
Figure 15. Indicator variable of the candidate classification of class Vegetation Canopy (dist4nn) in (a) an orthophoto of (b) test area T2. Polygons of the aerial photo interpretation are superimposed on both.
Figures A1-A18. Results of the automatic classification of tiles ETL1, ETL2, ETL3, ETL5, ETN1, ETN2, ETN3, ETN4, and ETN8, each paired with a comparison of the classification results (candidate points only) against the orthophoto and the field survey-supported aerial photo interpretation (legend in Figure 12).
Figure A19. Comparison of classification results (candidate points only) (a,b) with orthophotos (c,d) for T1 (a,c) and T2 (b,d); polygon boundaries of the field survey-supported aerial photo interpretation are depicted in the background.
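The classification is decision-based rather than learned: each candidate point is assigned to one of the three classes by thresholding its height above the water bottom together with the density and return attributes illustrated in Figures 5-7. The sketch below compresses this into a few rules; the thresholds and the exact rule order are illustrative, not the paper's calibrated values.

```python
import numpy as np

def classify_macrophytes(height, dist_all, n_returns,
                         low_max=0.3, dense_thresh=2.0, canopy_returns=2):
    """Assign Low Vegetation / High Vegetation / Vegetation Canopy labels.

    height:     point height above the water bottom (m)
    dist_all:   sum of distances to the 20 nearest neighbors (inverse density)
    n_returns:  number of returns of the laser pulse
    Thresholds are illustrative stand-ins, not the paper's calibrated values.
    """
    labels = np.full(height.shape, "Unclassified", dtype=object)
    labels[(height <= low_max) & (dist_all < dense_thresh)] = "Low Vegetation"
    labels[height > low_max] = "High Vegetation"
    labels[(height > low_max) & (n_returns >= canopy_returns)] = "Vegetation Canopy"
    return labels

h = np.array([0.1, 0.8, 1.6])
d = np.array([1.0, 3.0, 2.5])
r = np.array([1, 1, 3])
print(classify_macrophytes(h, d, r))
```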
13 pages, 3343 KiB  
Article
The Influence of Refractive Index Changes in Water on Airborne LiDAR Bathymetric Errors
by Xingyuan Xiao, Zhengkun Jiang, Wenxue Xu, Yadong Guo, Yanxiong Liu and Zhen Guo
J. Mar. Sci. Eng. 2024, 12(3), 435; https://doi.org/10.3390/jmse12030435 - 29 Feb 2024
Viewed by 1819
Abstract
Due to the limitations of measurement equipment and the influence of factors such as the environment and the target, measurement errors may occur during data acquisition in airborne LiDAR bathymetry (ALB). The refractive index of water is defined as the ratio of the speed of light in a vacuum to that in water; it influences not only the propagation speed of the laser pulse in water but also the propagation direction of the laser pulse entering the water. The influence of refractive index changes in water on ALB errors therefore needs to be analyzed. To this end, the principle of ALB is first briefly introduced. Then, the calculation method for the refractive index of water is described using Snell's law and an empirical formula. Finally, the influence of refractive index changes on ALB errors is analyzed using the derived formula at the water–air interface and in the water column. The experimental results showed that, for a bathymetric floor at a constant elevation of 50 m, the refractive index changes in water caused by temperature, salinity, and depth are less than 0.001. The maximum bathymetric error and maximum planimetric error caused by refractive index changes at the water–air interface are 0.036 m and 0.015 m, respectively. The ALB errors caused by refractive index changes in the water column are relatively small, and the water column does not need to be layered to calculate the ALB errors. Overall, refractive index changes in water account for only a small proportion of all ALB errors, so whether this effect needs to be corrected should be decided based on the accuracy requirements of the data acquisition. This study and analysis can provide a reference basis for correcting ALB errors.
(This article belongs to the Section Geological Oceanography)
Figures:
Figure 1. Principle schematic of dual-frequency ALB.
Figure 2. Diagram of the ALB error caused by the refractive index at the water–air interface.
Figure 3. Diagram of layered water refraction of propagating laser pulses.
Figure 4. Distribution of the CTD data sampling points. (a) Location of sampling point A in the South China Sea; (b) locations of sampling points B, C, and D in the Gulf of Mexico.
Figure 5. Calculated refractive index and measured temperature and salinity as functions of seawater depth. (a1-d1) Refractive index versus depth at sampling points A-D; (a2-d2) salinity versus depth at A-D; (a3-d3) temperature versus depth at A-D.
Figure 6. Changes in the refractive index with seawater depth, temperature, and salinity. (a) Refractive index versus depth and salinity; (b) versus depth and temperature; (c) versus salinity and temperature.
Figure 7. Relationships between the bathymetric error and the refractive index of water. (a,b) Bathymetric errors for different water depths at an incidence angle of 15°; (c,d) bathymetric errors for different incidence angles at a water depth of 50 m.
Figure 8. Relationships between the planimetric error and the refractive index of water. (a,b) Planimetric errors for different water depths at an incidence angle of 15°; (c,d) planimetric errors for different incidence angles at a water depth of 50 m.
Figure 9. Changes in the ALB error with the layer depth of the seawater column at sampling point A. (a) Bathymetric error; (b) planimetric error.
Figure 10. Changes in the ALB error with the layer depth of the seawater column at sampling point B. (a) Bathymetric error; (b) planimetric error.
Figure 11. Changes in the ALB error of the simulated data with the layer depth of seawater. (a) Bathymetric error; (b) planimetric error; (c) simulated refractive index of the seawater column.
30 pages, 34635 KiB  
Article
Innovative Maritime Uncrewed Systems and Satellite Solutions for Shallow Water Bathymetric Assessment
by Laurențiu-Florin Constantinoiu, António Tavares, Rui Miguel Cândido and Eugen Rusu
Inventions 2024, 9(1), 20; https://doi.org/10.3390/inventions9010020 - 5 Feb 2024
Cited by 1 | Viewed by 2558
Abstract
Shallow water bathymetry is a topic of significant interest in various fields, including civil construction, port monitoring, and military operations. This study presents several methods for assessing shallow water bathymetry using maritime uncrewed systems (MUSs) integrated with advanced and innovative sensors such as Light Detection and Ranging (LiDAR) and a multibeam echosounder (MBES). Furthermore, this study comprehensively describes satellite-derived bathymetry (SDB) techniques applied within the same geographical area. Each technique is outlined with respect to its implementation and resulting data, followed by an analytical comparison of accuracy, precision, speed, and operational efficiency. Accuracy and precision were evaluated against a bathymetric reference survey conducted with traditional means prior to the MUS survey, and through cross-comparisons between all the approaches. For each survey methodology, a comprehensive evaluation explains both its advantages and its limitations, giving the reader a complete picture of the efficacy and applicability of these methods. The experiments were conducted as part of the Robotic Experimentation and Prototyping using Maritime Unmanned Systems 23 (REPMUS23) multinational exercise, within the Rapid Environmental Assessment (REA) experimentations. Full article
(This article belongs to the Special Issue From Sensing Technology towards Digital Twin in Applications)
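Several of the cross-comparisons in this study (see the difference surfaces in Figures 19, 21, and 22 below) reduce to differencing co-registered bathymetric grids. A minimal sketch follows, assuming both surfaces have already been gridded to a common resolution and vertical datum; the arrays and the 25 cm tolerance are illustrative, not the exercise data.

```python
import numpy as np

def difference_surface(reference, survey):
    """Difference two co-registered bathymetric grids (reference minus
    survey) and summarize the agreement. NaN marks no-data cells. In a
    real workflow both surfaces would first be gridded to a common
    resolution and reduced to the same vertical datum (e.g., via tide
    observations) before differencing."""
    diff = reference - survey
    valid = diff[np.isfinite(diff)]
    return diff, {
        "mean_bias_m": float(np.mean(valid)),
        "std_m": float(np.std(valid)),
        "rmse_m": float(np.sqrt(np.mean(valid ** 2))),
        "pct_within_25cm": float(np.mean(np.abs(valid) <= 0.25) * 100.0),
    }

# Illustrative 200 x 200 grids of depths in metres (not exercise data).
rng = np.random.default_rng(0)
reference = 5.0 + rng.normal(0.0, 0.05, (200, 200))
survey = reference + 0.10 + rng.normal(0.0, 0.08, reference.shape)
_, stats = difference_surface(reference, survey)
print(stats)   # expect ~-10 cm mean bias from the simulated offset
```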
Figure 1: Overview of the survey areas.
Figure 2: Meteorological conditions for the study area: significant wave height (Hs), wave period (T02), and wave direction (Dp) from the WW3 model for the period of the surveys (red area).
Figure 3: Schiebel CAMCOPTER® S-100 with Areté PILLS (RAMMS) LiDAR system [41].
Figure 4: Hydrographic survey of the DriX USV during the REPMUS23 exercise.
Figure 5: Patch test survey lines for the DriX USV.
Figure 6: Subset of the DriX USV multibeam data suggesting sound speed was correctly sampled and applied, allowing for the proper representation of a flat seafloor (each colour represents a different survey line).
Figure 7: Subset of the reference survey multibeam data suggesting sound speed was correctly sampled and applied, allowing for the proper representation of a flat seafloor (each colour represents a different survey line).
Figure 8: Overview map depicting the general location of the OFFSHORE area, and a detailed plan representing the bathymetric coverage for each dataset.
Figure 9: Tide table used for the survey period showing the height of the tide. Available at [57].
Figure 10: Reference bathymetric surface.
Figure 11: Reference bathymetric survey statistics: (a) node density; (b) node standard deviation.
Figure 12: USV MB bathymetric surface.
Figure 13: USV MB bathymetric survey statistics: (a) node density; (b) node standard deviation.
Figure 14: UAV LiDAR bathymetric surface.
Figure 15: UAV LiDAR bathymetric survey statistics: (a) node density; (b) node standard deviation.
Figure 16: (a) Location of the vertical transversal profile between the shallow and deep surfaces; (b) inconsistency step between the shallow and deep LiDAR UAV surfaces (for the profile selected between the two coloured squares).
Figure 17: SDB mean values (expressed in meters).
Figure 18: Horizontal accuracy assessment of the bathymetric surfaces: (a) MB USV, LiDAR UAV, and the reference survey overlaid, focusing on a submerged sand formation; (b) focus on the same submerged sand formation, showing the match of all three surfaces.
Figure 19: Difference surface between the reference survey and the UAV LiDAR survey.
Figure 20: LiDAR UAV along-track artefacts shown with 20× vertical exaggeration.
Figure 21: Difference surface between the reference survey and SDB.
Figure 22: Difference surface between the UAV LiDAR survey and the USV MB survey.
Figure 23: Histogram of LiDAR UAV and SDB surface differences vs. the reference survey.
Figure 24: Vertical profile differences between the reference survey, UAV LiDAR, USV MBES, and SDB.
Figure 25: Reference surface depth contours vs. ENC depth contours.
Figure 26: MBES USV depth contours vs. ENC depth contours.
Figure 27: LiDAR UAV depth contours vs. ENC depth contours.
Figure 28: SDB depth contours vs. ENC depth contours.
17 pages, 3815 KiB  
Article
Estimation of Silting Evolution in the Camastra Reservoir and Proposals for Sediment Recovery
by Audrey Maria Noemi Martellotta, Daniel Levacher, Francesco Gentile and Alberto Ferruccio Piccinni
J. Mar. Sci. Eng. 2024, 12(2), 250; https://doi.org/10.3390/jmse12020250 - 30 Jan 2024
Viewed by 1436
Abstract
The reduction in the usable capacity of reservoirs caused by ongoing silting has created the need to remove sediments so that greater quantities of water can be stored. At the same time, removing sediment from the bottom means managing a large quantity of material, for which the current prospect of discharge is both economically and environmentally unsustainable. This research work assesses the increase in the silting volume of the Camastra reservoir and the speed at which the phenomenon is progressing, based on topographic and bathymetric surveys carried out in September 2022 using a DJI Matrice 300 RTK drone with ZENMUSE L1 LiDAR technology, multibeam surveys, and geophysical prospecting with a sub-bottom profiler. The increase in dead volume was estimated and compared with the value obtained from a calculation model from the literature and previous silting data. The model, which slightly underestimates the silting phenomenon, estimates the accumulated sediment volume from the original capacity of the reservoir, understood as the volume that can be filled with sediment in infinite time, reduced by an amount that depends on the characteristic time scale of reservoir filling and on the complexity of the silting phenomenon for the specific reservoir. Furthermore, the speed of sediment accumulation is increasing, which is linked to the more frequent occurrence of high-intensity, short-duration meteoric events caused by climate change and the associated intensification of erosion and transport. Sediment now occupies approximately 50% of the Camastra reservoir's capacity, which makes dredging policies and interventions a priority and adds to the practical significance of the present study. In this regard, the main recovery and reuse alternatives are identified and analyzed to make the removal of the accumulated material environmentally and economically sustainable, for example through environmental and material recovery applications, with a preference for applications that do not require sediment pretreatment. Full article
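One plausible reading of the model class sketched in the abstract is a saturating growth law S(t) = C0·(1 − exp(−(t/τ)^β)), with C0 the original capacity, τ the characteristic filling time scale, and β a shape exponent standing in for the complexity of the silting process; the paper's exact formulation may differ. A hedged sketch of fitting such a curve to silting observations:

```python
import numpy as np
from scipy.optimize import curve_fit

def silting_volume(t, tau, beta, capacity):
    """Saturating silting law: accumulated sediment tends to the
    original reservoir capacity as t -> infinity, governed by a
    characteristic filling time scale tau and a shape exponent beta
    (standing in for the 'complexity' of the silting process)."""
    return capacity * (1.0 - np.exp(-(t / tau) ** beta))

# Illustrative observations: years since dam closure vs. accumulated
# sediment volume in hm^3. NOT the Camastra survey data.
years = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
volume = np.array([0.9, 1.9, 2.6, 3.4, 4.1])
capacity_hm3 = 8.0   # assumed original capacity, illustration only

(tau_fit, beta_fit), _ = curve_fit(
    lambda t, tau, beta: silting_volume(t, tau, beta, capacity_hm3),
    years, volume, p0=(60.0, 1.0))
forecast = silting_volume(55.0, tau_fit, beta_fit, capacity_hm3)
print(f"tau = {tau_fit:.1f} yr, beta = {beta_fit:.2f}, "
      f"forecast at year 55 = {forecast:.2f} hm^3")
```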
Figure 1: Study area: Basilicata (light gray), Camastra watershed (green), Basento river (blue).
Figure 2: Surveys carried out at the Camastra reservoir: (a) drone survey; (b) GPS survey; (c,d) morpho-bathymetric and sub-bottom profiler surveys.
Figure 3: Multiple reflections found in the sub-bottom profiler survey of the Camastra reservoir.
Figure 4: Examples of structures and objects present within the Camastra reservoir: (a) dam weir; (b) boulder or vegetation present on the seabed; (c) bridge connecting two banks of the reservoir.
Figure 5: Silting evolution over the years and values forecast with model calculations up to 2022 (green point).
Figure 6: Evolution of the silting phenomenon over the years, forecast with model calculations up to 2022 (green point), and identification of the average value measured following the in situ survey in 2022 (red point); the collected data are from previous surveys, and the measured value derives from the 2022 in situ survey.
Figure 7: Comparison of seabed elevations measured in the 2017 survey (blue) and the 2022 survey (orange).
Figure 8: Overview of the processes for dredging and sediment recycling.
Figure 9: Panel of potential beneficial uses of the Camastra reservoir sediments.
25 pages, 15769 KiB  
Article
Nearshore Bathymetry from ICESat-2 LiDAR and Sentinel-2 Imagery Datasets Using Physics-Informed CNN
by Congshuang Xie, Peng Chen, Siqi Zhang and Haiqing Huang
Remote Sens. 2024, 16(3), 511; https://doi.org/10.3390/rs16030511 - 29 Jan 2024
Cited by 6 | Viewed by 2145
Abstract
The recently developed Ice, Cloud, and Land Elevation Satellite 2 (ICESat-2), furnished with the Advanced Topographic Laser Altimeter System (ATLAS), delivers considerable benefits in providing accurate bathymetric data across extensive geographical regions. By integrating active lidar-derived reference seawater depth data with passive optical remote sensing imagery, efficient bathymetry mapping is facilitated. In recent years, machine learning models have frequently been used to define the nonlinear relationship between remote sensing spectral data and water depth and thereby produce bathymetric maps. A salient model among these is the convolutional neural network (CNN), which effectively integrates contextual information around bathymetric points. However, current CNN models and other machine learning approaches mainly concentrate on recognizing mathematical relationships between water depth and remote sensing spectral data, while often disregarding the physical propagation of light through seawater before it reaches the seafloor. This study presents a physics-informed CNN (PI-CNN) model that incorporates radiative transfer-based information into the CNN structure. By including a shallow water double-band radiative transfer physical term (swdrtt), the model enhances seawater spectral features while also considering the context surrounding bathymetric pixels. The effectiveness and reliability of the proposed PI-CNN model are verified using in situ data from St. Croix and St. Thomas, where it generates bathymetric maps with R² exceeding 95% and errors below 1.6 m. Preliminary results suggest that the PI-CNN model surpasses conventional methodologies. Full article
(This article belongs to the Section Environmental Remote Sensing)
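The swdrtt term can be illustrated with the standard shallow-water radiative-transfer linearization: after subtracting the mean optically deep water (ODW) reflectance, ln(R_band − R̄_ODW) is approximately linear in depth (compare Figure 15 below), so differencing two such log terms largely cancels the unknown bottom albedo. The sketch below shows one way such a feature could be computed; the variable names and the paper's exact definition are assumptions.

```python
import numpy as np

def swdrtt_feature(r_band1, r_band2, odw_mean1, odw_mean2, eps=1e-6):
    """Two-band radiative-transfer feature (sketch).

    Under the usual single-scattering linearization,
        R_band ~ R_odw + A_b * exp(-2 * Kd * z),
    so ln(R_band - R_odw_mean) is roughly linear in depth z, and the
    difference between two bands suppresses the unknown bottom
    albedo A_b. The paper's exact swdrtt definition may differ."""
    x1 = np.log(np.clip(r_band1 - odw_mean1, eps, None))
    x2 = np.log(np.clip(r_band2 - odw_mean2, eps, None))
    return x1 - x2

# Illustrative Sentinel-2 surface reflectances for a small window,
# with assumed mean ODW reflectances for each band.
green = np.array([[0.052, 0.049], [0.061, 0.058]])
blue = np.array([[0.083, 0.080], [0.095, 0.091]])
feature = swdrtt_feature(green, blue, odw_mean1=0.012, odw_mean2=0.025)
print(feature)   # stacked with the raw bands as an extra CNN input plane
```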
Figure 1: Distribution of study regions. The base map comes from The World Ocean Reference in ArcMap 10.5.
Figure 2: Flowchart of the proposed PI-CNN model.
Figure 3: The CNN model structure.
Figure 4: Extraction results of seafloor points from ICESat-2 ATL03 data using the standard DBSCAN and AE-DBSCAN: (a) standard DBSCAN with ε = 1.2 m and MinPts = 4; (b) adaptive ellipse DBSCAN. Raw photons are shown in grey, surface photons in blue, detected seafloor photons in green, and refracted seafloor photons in red.
Figure 5: Sentinel-2 L2A true-color images of (a) St. Croix, (b) St. Thomas, (c) Anegada, and (d) Barbuda. ICESat-2 laser trajectories on different dates are shown by yellow lines.
Figure 6: OSW/ODW classification results using the neural network in (a) St. Croix, (b) St. Thomas, (c) Anegada, and (d) Barbuda. Land is shown in yellow, ODW areas in dark blue, and OSW areas in light blue.
Figure 7: Accuracy assessment with different window sizes.
Figure 8: Absolute error maps in St. Croix with different window sizes: (a) 3 × 3; (b) 7 × 7; (c) 9 × 9; (d) 11 × 11.
Figure 9: Error plots of (a) ICESat-2 reference bathymetric point depths vs. in situ depths in St. Croix; (b) ICESat-2 reference bathymetric point depths vs. in situ depths in St. Thomas.
Figure 10: (a) In situ map of St. Croix; (b) PI-CNN-derived bathymetric map of St. Croix; (c) in situ map of St. Thomas; (d) PI-CNN-derived bathymetric map of St. Thomas; (e) error plot of PI-CNN-estimated depths vs. in situ depths in St. Croix; (f) error plot of PI-CNN-estimated depths vs. in situ depths in St. Thomas.
Figure 11: Error plots of PI-CNN-estimated depths vs. ICESat-2-estimated depths at different sites: (a) St. Croix; (b) St. Thomas; (c) Anegada; (d) Barbuda.
Figure 12: PI-CNN-derived bathymetric maps of (a) Anegada and (b) Barbuda.
Figure 13: Accuracy assessment plots from 0 to 20 m with different band combinations in St. Croix: (a) R²; (b) RMSE.
Figure 14: Absolute error maps with different band combinations in St. Croix: (a) Bs_3; (b) B_3; (c) Bs_6; (d) B_6; (e) Bs_9; (f) B_9.
Figure 15: Scatterplots of (a) green vs. blue reflectance; (b) ln(green) vs. ln(blue); (c) ln(R_green − R̄_ODWgreen) vs. ln(R_blue − R̄_ODWblue); (d) red vs. blue reflectance; (e) ln(red) vs. ln(blue); (f) ln(R_red − R̄_ODWred) vs. ln(R_green − R̄_ODWgreen); (g) blue vs. red reflectance; (h) ln(blue) vs. ln(red); (i) ln(R_blue − R̄_ODWblue) vs. ln(R_red − R̄_ODWred). The color of the points represents the depth of the pixel, with the corresponding color bar on the right.
Figure 16: Accuracy assessment with different bathymetric models.
Figure 17: (a) Neural-network-derived bathymetric map of St. Croix; (b) neural-network-derived bathymetric map of St. Thomas; (c) band-ratio-model-derived bathymetric map of St. Croix; (d) band-ratio-model-derived bathymetric map of St. Thomas; (e) linear-model-derived bathymetric map of St. Croix; (f) linear-model-derived bathymetric map of St. Thomas.
Figure 18: (a) PI-CNN bathymetric map using the uncorrected image in St. Croix; (b) error plot of PI-CNN-estimated depths vs. in situ depths in St. Croix.
Figure 19: (a) Absolute error map with different band combinations in St. Croix; (b) profile of in situ depth and the results using the uncorrected and corrected images. The cross-section is shown as the green line in (a).
Figure 20: Bathymetric maps derived from other pretrained PI-CNN models at different sites: (a) St. Croix; (b) St. Thomas.
Figure 21: Average reflectance values within different depth ranges at different sites: (a) St. Croix; (b) St. Thomas.
20 pages, 6882 KiB  
Article
Identification of Vegetation Surfaces and Volumes by Height Levels in Reservoir Deltas Using UAS Techniques—Case Study at Gilău Reservoir, Transylvania, Romania
by Ioan Rus, Gheorghe Șerban, Petre Brețcan, Daniel Dunea and Daniel Sabău
Sustainability 2024, 16(2), 648; https://doi.org/10.3390/su16020648 - 11 Jan 2024
Cited by 1 | Viewed by 1106
Abstract
The hydrophilic vegetation of reservoir deltas expands rapidly in surface area and vegetal mass, driven by a significant influx of alluvium and nutrients from watercourses. Through its organic residues, it contributes to the degradation of reservoir water quality and to reservoir silting. In this paper, we propose a method for evaluating two-dimensional and three-dimensional parameters (surfaces and volumes of vegetation) using combined photogrammetric techniques from the UAS category. Raster and vector data, namely a high-resolution orthophotoplan (2D), a point cloud (pseudo-LIDAR) (3D), and points defining the topographic surface (3D), formed the basis for the derived grid products: a DTM (Digital Terrain Model) and a DSM (Digital Surface Model). After completing the steps of the adopted workflow (data acquisition, processing, post-processing, and integration into GIS) and the subsequent grid analysis, the two variables targeted by this research resulted: the surface of the vegetation and its volume. Data acquisition (yielding grids with centimeter resolution) in areas that are inaccessible by classical topometric or bathymetric means (shallow depths, the presence of organic mud and aquatic vegetation, etc.) plays an important role in tracking reservoir depth dynamics and reservoir usage. The calculations produced results of practical and scientific interest: Cut Volume = 196,000.3 m³, Cut 2D Surface Area = 63,549 m², Fill Volume = 16.59998 m³, Fill 2D Surface Area = 879.43 m², Total Volume Between Surfaces = 196,016.9 m³. We emphasize that this approach does not aim to study the vegetation's diversity but to determine its dimensional components (surface and volume), whose organic residues contribute to the impairment of reservoir functions (water supply, hydropower production, flash-flood attenuation capacity, etc.). Full article
(This article belongs to the Special Issue Water Resource Management and Sustainable Environment Development)
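The volume estimation described above amounts to integrating the DSM−DTM height difference over the grid. A minimal sketch under the assumption of co-registered grids with a common cell size; the synthetic surfaces are illustrative, not the Gilău data.

```python
import numpy as np

def cut_fill(dsm, dtm, cell_area_m2):
    """Vegetation surface and volume between co-registered DSM and DTM
    grids sharing one cell size. 'Cut' collects cells where the DSM
    rises above the DTM (canopy); 'fill' collects cells falling below
    it (noise or interpolation artefacts). NaN cells are skipped."""
    dz = dsm - dtm
    cut = dz > 0            # NaN compares False, so NaNs drop out
    fill = dz < 0
    return {
        "cut_volume_m3": float(np.nansum(np.where(cut, dz, 0.0)) * cell_area_m2),
        "cut_area_m2": float(np.count_nonzero(cut) * cell_area_m2),
        "fill_volume_m3": float(-np.nansum(np.where(fill, dz, 0.0)) * cell_area_m2),
        "fill_area_m2": float(np.count_nonzero(fill) * cell_area_m2),
    }

# Illustrative 0.5 m grids: 1-3 m of canopy over flat terrain
# (synthetic values, not the Gilău Lake delta data).
rng = np.random.default_rng(1)
dtm = np.full((400, 400), 420.0)                  # terrain elevation (m)
dsm = dtm + rng.uniform(1.0, 3.0, dtm.shape)      # canopy surface (m)
print(cut_fill(dsm, dtm, cell_area_m2=0.25))
```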
Figure 1: General map of the hydrotechnical system of the Upper Someșul Mic (sources: [58,59]). Upper inset: the Someșul Rece Delta as of 2008; lower inset: its location within Transylvania and Romania, and Romania's position within the European continent [60].
Figure 2: Components of the UAS equipment used: Phantom 4 Pro drone and control panel.
Figure 3: Image residuals for FC 330 (3.61 mm).
Figure 4: (a) Mission route; (b) camera locations and image overlap.
Figure 5: Camera locations and error estimates (Z error is represented by ellipse color; X and Y errors are represented by ellipse shape; estimated camera locations are marked with a black dot).
Figure 6: (a) Simple point cloud; (b) densified point cloud.
Figure 7: Densified point cloud classification.
Figure 8: (a) Digital Surface Model; (b) Digital Terrain Model with 1 m contour equidistance.
Figure 9: (a) Orthophotoplan of the study area (0.047 m/pix); (b) 3D raster "sandwich" used for the Cartesian analysis of vegetation.
Figure 10: (a) NDVI index of the study area; (b) confidence map (%) of the derived model.
Figure 11: Detailed distribution of overlapping profiles in the area of interest.
Figure 12: (a) Terrain surface; (b) vegetation surface cropped to the AOI and used for the volume calculation.
Figure 13: Percentage of vegetation area and volume by altitudinal difference from the topographic surface of the Gilău Lake delta, according to the values in Table 4.
Full article ">
23 pages, 15395 KiB  
Article
Analysis of Depths Derived by Airborne Lidar and Satellite Imaging to Support Bathymetric Mapping Efforts with Varying Environmental Conditions: Lower Laguna Madre, Gulf of Mexico
by Kutalmis Saylam, Alejandra Briseno, Aaron R. Averett and John R. Andrews
Remote Sens. 2023, 15(24), 5754; https://doi.org/10.3390/rs15245754 - 16 Dec 2023
Cited by 2 | Viewed by 1681
Abstract
In 2017, Bureau of Economic Geology (BEG) researchers at the University of Texas at Austin (UT Austin) conducted an airborne lidar survey campaign, collecting topographic and bathymetric data over Lower Laguna Madre, a shallow hypersaline lagoon in south Texas. Researchers acquired 60 hours of lidar data, covering an area of 1600 km² with varying environmental conditions influencing water quality and surface heights. In the southernmost parts of the lagoon, in-situ measurements were collected from a boat to quantify turbidity, water transparency, and depths. Data analysis included processing Sentinel-2 L1C satellite imagery pixel reflectance to classify locations with intermittent turbidity. Lidar measurements were compared to sonar recordings, and the results revealed height differences of 5–25 cm where the lagoon was shallower than 3.35 m. Further, researchers analyzed satellite bathymetry at relatively transparent lagoon locations, and the results produced height agreement within 13 cm. The study concluded that bathymetric efforts with airborne lidar and optical satellite imaging have practical limitations and comparable results in large, dynamic, shallow coastal estuaries, where in-situ measurements and tide adjustments are essential for height comparisons. Full article
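The study's conclusion that tide adjustments are essential for height comparisons can be sketched as follows: sonar depths, measured below the instantaneous water surface, are reduced to the lidar's vertical datum before differencing. A minimal sketch with illustrative arrays and noise levels, not the Laguna Madre data.

```python
import numpy as np

def compare_bottom_heights(lidar_z, sonar_depth, tide_height):
    """Compare lidar bottom heights with sonar soundings on a common
    vertical datum. Sketch assumptions: lidar heights are already
    datum-referenced, while sonar depths are measured below the
    instantaneous water surface, so the tide height at ping time is
    added back before differencing. Inputs are matched point pairs."""
    sonar_z = tide_height - sonar_depth        # sonar bottom height
    diff = lidar_z - sonar_z
    r = np.corrcoef(lidar_z, sonar_z)[0, 1]
    return {"mean_diff_cm": float(np.mean(diff) * 100.0),
            "std_cm": float(np.std(diff) * 100.0),
            "r2": float(r ** 2)}

# Illustrative matched points in metres (not the Laguna Madre data):
# a 5 cm deep bias for the sonar and a campaign-like surface spread.
rng = np.random.default_rng(2)
bottom = -rng.uniform(0.4, 3.0, 500)               # true bottom height
tide = rng.normal(0.04, 0.18, 500)                 # water surface height
lidar_z = bottom + rng.normal(0.0, 0.05, 500)
sonar_depth = tide - bottom + rng.normal(0.05, 0.04, 500)
print(compare_bottom_heights(lidar_z, sonar_depth, tide))
```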
Figure 1: (a) Aerial image of Area-1: turbidity was low, the lagoon was shallow, and the bottom was visible for observations. (b) Aerial image of Area-3: turbidity and floating vegetation created challenges for bottom measurement.
Figure 2: Lower Laguna Madre in southern Texas and in-situ locations (TCEQ reference stations, NOAA tide gauges, and BEG observation areas). The yellow polygon indicates the extent of the airborne survey area (1600 km²).
Figure 3: The influence of increasing turbidity on the theoretical bathymetric capability (maximum depth: D_max) of Chiroptera is a linear relationship (R² = 0.82).
Figure 4: Comparison of heights as measured by each Chiroptera scanner. Results indicated a median bias of less than 4 cm measured from an altitude of 500 m.
Figure 5: (a–d) Pixel reflectance as recorded with Sentinel-2A L1C Band 5, classified using the ENVI v5.5 ISODATA algorithm.
Figure 6: Lidar bathymetry of the entire lagoon. The mean depth was 0.61 m, and 42.65% of all measurements were between 0.4 and 1.2 m.
Figure 7: Surface elevation variation during the airborne data acquisition campaign. The mean height was 0.04 m, and the standard deviation was 0.18 m.
Figure 8: Lagoon bottom as measured with Chiroptera. 51% of the lagoon depths were greater than 0.4 m.
Figure 9: (a,b) In Area-1, lidar and sonar heights produced a linear agreement (R² = 0.68), indicating higher turbidity levels at depths greater than 1 m. The sonar recorded slightly deeper than the lidar (mean difference = 5 cm).
Figure 10: (a,b) In Area-2, turbidity was high at depths shallower than 1 m (mean = 8.6 NTU), scattering and attenuating lidar pulses and adversely influencing the correspondence between lidar and sonar measurements (R² = 0.38). The mean height difference was 14 cm.
Figure 11: (a,b) In Area-3, turbidity was the highest (mean = 10.5 NTU) and increased with depth. Fewer sonar and lidar measurements were matched (8%) because of loose algorithm thresholds. The mean height difference increased to 25 cm, improving the regression and generating a bi-modal distribution (R² = 0.71).
Figure 12: (a,b) SDB vs. lidar bathymetry in 2017 in Area-1. Measurement differences produced a skewed distribution, and SDB measured deeper than lidar (mean difference < 13 cm).
Figure 13: (a,b) SDB vs. lidar depths in 2017 in Area-2. Turbidity increased, and the confidence between the measurements declined (<61%), particularly in areas deeper than 1 m.
Figure A1: Class 0. Synthetic water surface interpolated using a proprietary algorithm. A strong NIR backscatter surface peak estimates the elevation of the water surface, confirmed by the immediate peaks in the green wavelength.
Figure A2: Class 5. Water surface represented by the first strong backscatter peak from the green-wavelength waveform, without the use of an NIR channel.
Figure A3: Class 7. Last return originating from a reflective backscatter surface (or the bottom) in the water column, calculated from a strong peak in the waveform.
Figure A4: Class 10. A reflective surface (or bottom) detected in the water column, calculated using peaks of lower amplitude than those used in Class 7.