Search Results (1,649)

Search Parameters:
Keywords = mapping lidar

19 pages, 4642 KiB  
Article
Estimating the Potential for Rooftop Generation of Solar Energy in an Urban Context Using High-Resolution Open Access Geospatial Data: A Case Study of the City of Tromsø, Norway
by Gareth Rees, Liliia Hebryn-Baidy and Clara Good
ISPRS Int. J. Geo-Inf. 2025, 14(3), 123; https://doi.org/10.3390/ijgi14030123 - 7 Mar 2025
Abstract
An increasing trend towards the installation of photovoltaic (PV) solar energy generation capacity is driven by several factors including the desire for greater energy independence and, especially, the desire to decarbonize industrial economies. While large ‘solar farms’ can be installed in relatively open areas, urban environments also offer scope for significant energy generation, although the heterogeneous nature of the surface of the urban fabric complicates the task of forming an area-wide view of this potential. In this study, we investigate the potential offered by publicly available airborne LiDAR data, augmented using data from OpenStreetMap (OSM), to estimate rooftop PV generation capacities from individual buildings and regionalized across an entire small city. We focus on the island of Tromsøya in the city of Tromsø, Norway, which is located north (69.6° N) of the Arctic Circle, covers about 13.8 km2, and has a population of approximately 42,800. A total of 16,377 buildings were analyzed. Local PV generation potential was estimated between 120 and 180 kWh m−2 per year for suitable roof areas, with a total estimated generation potential of approximately 200 GWh per year, or approximately 30% of the city’s current total consumption. Regional averages within the city show significant variations in potential energy generation, highlighting the importance of roof orientation and building density, and suggesting that rooftop PV could play a much more substantial role in local energy supply than is commonly assumed at such high latitudes. The analysis method developed here is rapid, relatively simple, and easily adaptable to other locations. Full article
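The headline numbers in this abstract follow from a simple area-times-yield calculation. The sketch below illustrates it; the roof area and the mid-range yield value are illustrative assumptions taken from the quoted 120–180 kWh m−2 per year range, not the paper's actual algorithm.

def annual_generation_kwh(suitable_roof_m2, specific_yield_kwh_m2_yr=150.0):
    # specific_yield_kwh_m2_yr is an assumed mid-range value within the
    # 120-180 kWh m-2 per year reported for Tromsø; it already folds in
    # insolation, cloudiness and module efficiency, so no further derating here.
    return suitable_roof_m2 * specific_yield_kwh_m2_yr

# Example: a building with 80 m2 of usable roof area
print(f"{annual_generation_kwh(80.0):,.0f} kWh per year")   # -> 12,000 kWh per year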
Figures

Figure 1. (a) Study area location, showing the island of Tromsøya where most parts of the city of Tromsø are located. Our analysis is confined to the island area. (b) The geographical location of Tromsø in Norway is marked with a red dot.
Figure 2. Geographical cloudiness function f defined for Europe, calculated using Equation (2) and PVGIS-SARAH2 estimates of average horizontal insolation between 2005 and 2020. The value of f at the location of Tromsø is taken as 0.4. The apparent discontinuity in values at latitude 65° is due to the joining of two different sources of solar irradiance data (SARAH2 in the southerly part, ERA5 in the northerly part).
Figure 3. Estimating unusable parts of roofs. (a,c) show image extracts with small and large buildings, respectively, while (b,d) show the corresponding areas after applying the roof roughness filter defined by Equation (8). The variable represented in these raster images is the value of the annual estimated area-specific PV generation, E2.
Figure 4. Integrated workflow—combining insolation modeling (left) with LiDAR-based image processing (right) used to estimate annual rooftop PV potential and identify usable roof surfaces. Colors and symbols are used to differentiate different data types, as specified in the legend.
Figure 5. (a) Local area-specific rooftop solar PV generation potential for Tromsøya, estimated using the algorithm developed in this work; (b) enlarged view of part of (a).
Figure 6. Area-specific solar PV generation potential. Smoothing radius = 250 m.
Figure 7. Area-specific generation potential as a function of roof slope angle for all buildings on Tromsøya. Each point represents a single roof. The regression line is the best-fitting quadratic variation.

19 pages, 6875 KiB  
Article
Estimation of Forest Canopy Height Using ATLAS Data Based on Improved Optics and EEMD Algorithms
by Guanran Wang, Ying Yu, Mingze Li, Xiguang Yang, Hanyuan Dong and Xuebing Guan
Remote Sens. 2025, 17(5), 941; https://doi.org/10.3390/rs17050941 - 6 Mar 2025
Viewed by 112
Abstract
The Ice, Cloud, and Land Elevation 2 (ICESat-2) mission uses a micropulse photon-counting lidar system for mapping, which provides technical support for capturing forest parameters and carbon stocks over large areas. However, the current algorithm is greatly affected by the slope, and the extraction of the forest canopy height in the area with steep terrain is poor. In this paper, an improved algorithm was provided to reduce the influence of topography on canopy height estimation and obtain higher accuracy of forest canopy height. First, the improved clustering algorithm based on ordering points to identify the clustering structure (OPTICS) algorithm was developed and used to remove the noisy photons, and then the photon points were divided into canopy photons and ground photons based on mean filtering and smooth filtering, and the pseudo-signal photons were removed according to the distance between the two photons. Finally, the photon points were classified and interpolated again to obtain the canopy height. The results show that the improved algorithm was more effective in estimating ground elevation and canopy height, and the result was better in areas with less noise. The root mean square error (RMSE) values of the ground elevation estimates are within the range of 1.15 m for daytime data and 0.67 m for nighttime data. The estimated RMSE values for vegetation height ranged from 3.83 m to 2.29 m. The improved algorithm can provide a good basis for forest height estimation, and its DEM and CHM accuracy improved by 36.48% and 55.93%, respectively. Full article
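For readers unfamiliar with OPTICS-based photon denoising, a baseline sketch follows, using scikit-learn's stock OPTICS on (along-track, height) coordinates. The anisotropic height scaling is only a crude stand-in for the paper's improved elliptical neighbourhood, and all parameter values are assumptions.

import numpy as np
from sklearn.cluster import OPTICS

def denoise_photons(along_track_m, height_m, h_scale=10.0):
    # Return a boolean mask of signal photons. Dividing height by h_scale
    # stretches the neighbourhood horizontally, a rough stand-in for the
    # elliptical search region described in the paper (values are assumptions).
    X = np.column_stack([along_track_m, np.asarray(height_m) / h_scale])
    labels = OPTICS(min_samples=10, max_eps=20.0).fit_predict(X)
    return labels != -1   # -1 marks noise in scikit-learn's convention

# Example with synthetic photons: a flat ground return plus uniform noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 1000, 3000)
h = np.where(rng.random(3000) < 0.5, rng.normal(0, 0.3, 3000), rng.uniform(-50, 50, 3000))
mask = denoise_photons(x, h)
print(f"kept {mask.sum()} of {mask.size} photons as signal")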
Figures

Figure 1. The graph on the left represents the study area. The upper picture on the right shows the forest cover in the area shown and the lower figure shows the appearance of the drone sensor used in this experiment.
Figure 2. Schematic diagram of the ICESat-2/ATLAS track.
Figure 3. Search neighborhood shape diagram. The circle and the ellipse represent the original neighborhood shape and the improved neighborhood shape, respectively.
Figure 4. Core distance and reachability distance schematic diagram. The red arrow indicates the distance from the center point to different locations. The circle and the ellipse represent the original neighborhood shape and the improved neighborhood shape, respectively.
Figure 5. The decomposition results of initial ground photons using EMD. (a) Original signal; (b) IMF 1; (c) IMF 2; (d) IMF 3; (e) IMF 4; (f) IMF 5; (g) IMF 6; (h) IMF 7; (i) IMF 8; (j) the final residual.
Figure 6. Canopy photon recognition algorithm flow.
Figure 7. The final results of the noise-removal algorithm (low signal-to-noise ratio, complex terrain).
Figure 8. The final results of the noise-removal algorithm (high signal-to-noise ratio, relatively uncomplicated terrain).
Figure 9. The final ground photon extraction and the ground surface generation.
Figure 10. (a–d) The DSM and DEM results for high signal-to-noise ratio and the DSM and DEM results for low signal-to-noise ratio, respectively.
Figure 11. (a1–a3) The DEM, DSM, and CHM error box plots in the high signal-to-noise ratio region and (b1–b3) the DEM, DSM, and CHM error box plots in the low signal-to-noise ratio region, respectively. I stands for improved algorithm and O stands for original algorithm.
Figure 12. Scatter plot of the change in the orbital direction of the residual extension satellite due to slope.
Figure 13. Scatter plot of orbital direction change in residual extension satellite affected by forest cover.

44 pages, 14026 KiB  
Review
Coastal Environments: LiDAR Mapping of Copper Tailings Impacts, Particle Retention of Copper, Leaching, and Toxicity
by W. Charles Kerfoot, Gary Swain, Robert Regis, Varsha K. Raman, Colin N. Brooks, Chris Cook and Molly Reif
Remote Sens. 2025, 17(5), 922; https://doi.org/10.3390/rs17050922 - 5 Mar 2025
Viewed by 189
Abstract
Tailings generated by mining account for the largest world-wide waste from industrial activities. As an element, copper is relatively uncommon, with low concentrations in sediments and waters, yet is very elevated around mining operations. On the Keweenaw Peninsula of Michigan, USA, jutting out into Lake Superior, 140 mines extracted native copper from the Portage Lake Volcanic Series, part of an intercontinental rift system. Between 1901 and 1932, two mills at Gay (Mohawk, Wolverine) sluiced 22.7 million metric tonnes (MMT) of copper-rich tailings (stamp sands) into Grand (Big) Traverse Bay. About 10 MMT formed a beach that has migrated 7 km from the original Gay pile to the Traverse River Seawall. Another 11 MMT are moving underwater along the coastal shelf, threatening Buffalo Reef, an important lake trout and whitefish breeding ground. Here we use remote sensing techniques to document geospatial environmental impacts and initial phases of remediation. Aerial photos, multiple ALS (crewed aeroplane) LiDAR/MSS surveys, and recent UAS (uncrewed aircraft system) overflights aid comprehensive mapping efforts. Because natural beach quartz and basalt stamp sands are silicates of similar size and density, percentage stamp sand determinations utilise microscopic procedures. Studies show that stamp sand beaches contrast greatly with natural sand beaches in physical, chemical, and biological characteristics. Dispersed stamp sand particles retain copper, and release toxic levels of dissolved concentrations. Moreover, copper leaching is elevated by exposure to high DOC and low pH waters, characteristic of riparian environments. Lab and field toxicity experiments, plus benthic sampling, all confirm serious impacts of tailings on aquatic organisms, supporting stamp sand removal. Not only should mining companies end coastal discharges, we advocate that they should adopt the UNEP “Global Tailings Management Standard for the Mining Industry”. Full article
(This article belongs to the Special Issue GIS and Remote Sensing in Ocean and Coastal Ecology)
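The percentage stamp sand (%SS) determinations mentioned in the abstract are microscope grain counts, so their sampling uncertainty behaves binomially (see Figure 9 below). A short worked sketch follows; the per-sample count of 400 grains is an assumed value chosen only because it reproduces the 15% to 2% CV range quoted in the caption, not a figure taken from the paper.

import math

def percent_ss_cv(p_ss, n_grains=400):
    # Coefficient of variation of the estimated stamp-sand fraction p_ss when
    # n_grains grains are classified; n_grains = 400 is an assumed count size.
    sd = math.sqrt(p_ss * (1.0 - p_ss) / n_grains)   # binomial standard error
    return sd / p_ss                                  # CV relative to the estimate

for p in (0.10, 0.50, 0.90):
    print(f"{p:.0%} SS -> CV ~ {percent_ss_cv(p):.1%}")
# Spans roughly 15% down to about 2%, the range quoted for Figure 9d.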
Figures

Figure 1. Geographic location of Grand (Big) Traverse Bay (red to green contours) along the eastern shoreline of the Keweenaw Peninsula. On the Peninsula, early copper mines are indicated by black dots within the Portage Lake Volcanic Series (dashed lines) and large stamp mills by stars. Two mills (Wolverine and Mohawk) are located near Gay. Insert shows anthropogenic copper inventory "halo" around the Peninsula, in µg/cm² copper (modified from [37]).
Figure 2. Gay Stamp Sand Pile. (a) Wooden discharge launder distributing tailings onto the Gay Pile, around 1922, with smaller sluices conveying stamp sand and slime clays laterally (courtesy MTU Archives). (b) Photo of 6–8 m high stamp sand bluffs in July 2008 off Gay, showing a buried small lateral sluiceway protruding out of the pile along the shoreline. Lake Superior waters are to the right; the dark beach sands are stamp sands with intermixed slime clay layers. (c) Bluff photo from about the same location in 2019, when shoreline erosion (ca. 7–8 m/year) reached the buried launder support beams, just before bluff removal. (b and c photos, W.C. Kerfoot).
Figure 3. Stamp sands in situ under natural sunlight: (a) wet, redeposited Gay stamp sand beach deposits close-up (12.5 cm wide field), showing coloured crushed gangue mineral grains and (b) from a distance, with a lens cap (6 cm) for scale, tailings appear as a dark grey (low albedo), coarse-grained (2–4 mm), sand-sized beach deposit (courtesy Bob Regis).
Figure 4. Wolverine and Mohawk Mills at Gay, Michigan, around the late 1920s to early 1930s: (a) Railroad (Gay) frontside of mill complex, showing railroad station and where rails led up to the top floor of each mill. (b) Backside view of each mill. Steam-driven stamps crushed the rock, and an assortment of jigs and tables used water from Lake Superior on different floors to separate out denser copper-rich particles into concentrates shipped to smelters. The slime clay and stamp sand fractions were sluiced out onto a pile behind the two mills (MTU Archives).
Figure 5. Grand (Big) Traverse Bay: 2010 LiDAR DEM (digital elevation model) colour-coded by elevation and water depth (depth scale to right). Red horizontal contour lines are at 5 m depth intervals. Gay tailings pile ("original pile") is indicated, as well as migrating underwater stamp sand bars dropping into an ancient river channel (the Trough; at locations #1 and #5). On the eastern flanks of Buffalo Reef, stamp sands are moving out of the Trough into cobble/boulder fields (#3, #4). Along the western edges, stamp sands have migrated as a beach deposit to the Traverse River Seawall (#8) and are slipping down into an underwater depression (#7) next to cobble/boulder fields. Stamp sands are also moving around the harbour outlet (#8). Hence, both the eastern and western sides of Buffalo Reef are experiencing tailings encroachment. Lower on the reef (#2, #6), there is little contamination. Past the Traverse River, the sands in the southern bay are almost exclusively natural quartz grains (#9), forming a white beach with shoreline cusp-like features and bar, plus ridges (#10) of natural sand moving from the shelf into deeper waters off the bay and into Lake Superior (modified from [58,59]).
Figure 6. LiDAR details of bay substrate types, plus stamp sand movement using DEM differences: (a) A 2016 LiDAR bathymetric DEM broken into dominant surface substrate types. Coloured points indicate Ponar sampling sites and percentage of stamp sands across dominant surface substrate types (SS, stamp sand; NS, natural quartz sand; CBL, cobble & boulders; BD, bedrock). Black dots indicate where the Ponar dredge was unable to capture a substrate sample, bouncing off bedrock. (b) Superimposed outline of Buffalo Reef boundaries with LiDAR-difference estimates (2008–2016) of erosion at the Gay Pile shoreline (red) and deposition of underwater bars (blue) towards and into the Trough. One Tg (teragram) is equivalent to one million metric tonnes (MMT). These DEMs aided planning for part of Stage 1 remediation (dredging of Traverse River Harbour; Trough). Buffalo Reef boundaries are indicated by the thick black line.
Figure 7. For LiDAR, two laser pulses (blue-green 532 nm and near-IR 1064 nm) sweep across the lake surface: (a) The near-IR reflects off the water surface, whereas the blue-green penetrates through the water column and reflects off the lakebed. The difference between the two returning pulses gives the depth of the water column and details of bathymetry (modified from LeRocque and West [63]); (b) Simulated LiDAR waveform fitted with a Gaussian function (water surface peak), a triangle function (water column reflectance), and a Weibull function for bottom reflectance (after Abdallah et al. [64]).
Figure 8. Examples of MTRI UAS drone options: Bergen Hexacopter and Quad-8, assorted small (UAS) quadcopters (e.g., DJI Mavic Pro, DJI Phantom 3A) in left blue panel. Example sensors carried by drone platforms are shown in right red panel.
Figure 9. Microscope grain counting statistics: (a) Sample of sand grains from the Sand Point site in lower Keweenaw Bay (ca. 55% stamp sands) under transmitted light from a microscope, showing the contrast between rounded natural sand (transparent quartz) and dark sub-angular stamp sand grains (dark, irregular edges, slightly larger). (b) Size frequency distributions for the two particle types (stamp sand, red; natural quartz sand, blue) from the Sand Point site. (c) The observed grain counts (mixture of natural sand and stamp sand) appear to follow a binomial distribution. (d) The Coefficient of Variation (CV) for the %SS calculation is predicted from the equation under "Theoretical". Field counts (see "Observed", left) correspond generally to expected values. Over the interval from 10 to 90% SS, the predicted CV (right) is between 15% and 2% (ca. mean of 5%).
Figure 10. Daphnia pulex survivorship and fecundity experiment in stamp sand ponds at Gay. Forty 40 mL vials had one adult Daphnia in each container and were submersed in shallow water of the ponds. Each vial had a 100 µm mesh Nitex netting over the top, secured by rubber bands. A temperature probe (STOW AWAY-IS Model, Onset Computer Corporation) was placed near the set to check daily temperature fluctuations during the experiments.
Figure 11. UAS Traverse River Seawall drone studies. Above, Traverse River Harbor, showing stamp sand overtopping the Army Corps Seawall (2019). Notice highly stained (natural high DOC, low pH) water moving out of the river. Shoreline water depth descends sharply off the grey stamp sand beach to the right, whereas the natural white sand (quartz) beach to the left has a shallower nearshore draft with an offshore bar. Some stamp sand lenses (dark) have crossed over onto the white sand beach margin. Initial drone UAS orthomosaic survey is in the middle right (white, Hunter King, MI EGLE) from late 2019. MTRI 2019 Digital Elevation Model (DEM), artificially coloured and hill-shaded, is in the bottom left, contoured from GPS ellipsoidal height. Elevation profiles along cross-section transect lines (bottom left, 1–5), with corresponding elevation profiles drawn on bottom right. Detailed contouring emphasises the increased width and vertical height of stamp sand accumulating north of the harbour seawall, leading to over-topping (Colin Brooks, MTRI).
Figure 12. Shoreline details, contrasting elevations and beach features north and south of the Traverse River Harbor: (a) Shoreline elevation 2019 profiles north (stamp sand; brown) and south (natural sand beach; yellow) of Traverse River Harbor relative to 2019 average sea level (flat horizontal line). Tree line is at 0 on the x-axis. (b) Enlargement of 2016 LiDAR underwater shallow cusp structures and offshore bar south of the Traverse River. Notice the lack of circular cusp structures north of the harbour, off the stamp sand beach (courtesy Bob Regis and Christina Eddleman).
Figure 13. Army Corps initial remediation plans and implementation steps (dredging & berm placement): (a) Dredging and excavation of stamp sand from the Traverse River Harbor (2017–21) and Trough (2020–21) followed by deposition into the Berm Complex. Removal of stamp sand north of the Seawall (orange site) was later enlarged from 50’ to around 500’ (courtesy U.S. Army Corps, Detroit). (b) Initial dredging begins at the Traverse River Harbor (top, fall of 2017); 5–7 km of plastic pipes (middle) and pumping stations (bottom) used to transport stamp sand from the Traverse Harbor and Trough to the Berm Complex (2019–2021). Shovel (middle right) used during berm wall construction. (Photos W.C. Kerfoot).
Figure 14. Drone photo of Berm Complex (2021). In the stamp sand Pond Field southwest of Gay (stack site), berm walls were constructed from local stamp sand. Plastic pipes carried in dredged stamp sands from the Traverse River Harbor and Trough. The darker reddish-brown sediments are stamp sands in the Pond Field beach, whereas the lighter pink and orange sediments are recently deposited dredged spoils within the berm walls (2020–2021). Notice water percolating through berm walls into bordering ponds. The outer shoreline thickening is also part of a "revetment-like" stamp sand addition, intended to protect the Berm Complex from enhanced shoreline erosion. (Drone photo by MDNR).
Figure 15. UAS high-resolution drone elevation and bathymetry surveys (base map 9 August 2022) of shoreline retreat at the original Gay pile location after bluff removal. Overlays along the beach edge trace shorelines in 2009, 2016, and 2022. The 78 m retreat over 6 years (2016–2022) equates to a 13 m/yr rate. The previous, nearly constant, long-term retreat rate prior to 2009 averaged 7.9 m/yr (ca. 26’) [7,57]. The original Jacobsville Sandstone shoreline, before stamp sands were discharged, is marked by the red border in the far-left upper region. Note white concrete basements of the two mills and remnants of both wooden and broken concrete launders in the northern region. Environmental recovery is beginning, as benthic organisms and fish are returning to clear underwater stretches of the bedrock shelf, where waves have removed stamp sands. Scattered trees (many birch) are beginning to colonise what is left of the original Gay Pile surface.
Figure 16. Particle sizes and Cu concentrations: (a) Mean particle sizes are plotted across Grand (Big) Traverse Bay and into Little Traverse Bay, to the southwest. Legend for mean particle size is in upper left, distances in lower right. The mean particle sizes for stamp sand (basalt) beaches are slightly larger when compared with natural (quartz) beach grains. Underwater, across the coastal shelf and into deep-water sediments, there is a major particle size reduction related to water depth. (b) Directly measured mean Cu concentrations in bay sediments (ppm; legend in upper left; largely AEM Group data). Values are only from the surface level of beach sands, underwater shelf, and deep-water Ponar sediment samples. (Plots by Gary Swain).
Figure 17. Ponar data maps: (a) Dispersal of stamp sands in Grand (Big) Traverse Bay surface sediments. The percentage of stamp sand (% SS) in underwater sand mixtures is colour-coded (legends in the upper left). Dots indicate Ponar sampling sites. Maximum values occur between the southwestern edge of the original Gay pile to the Coal Dock region (including migrating underwater stamp sand bars and fields, and deposition into northern Trough regions). Modest stamp sand percentages also extend down to the Traverse River Harbor and spread offshore. Percentage contours suggest that stamp sands are now moving around the Army Corps Seawall into the Lower Bay. (b) Depression of benthic invertebrates in surface Ponar samples across the bay. Density of macroinvertebrates (low densities are in deep red) plotted across the bay and on Buffalo Reef. Densities are most impacted near high % SS and Cu-rich regions between the Pond Field and Coal Dock regions, but are recovering off the Gay Pile. Impacts are also evident off the Traverse Harbor region. Reduced benthic densities appear more extensive across the reef than originally anticipated from just percentage stamp sand plots (modified from [30]).
Figure 18. Copper concentrations versus percentage stamp sand. (a) Means of Cu concentration at 10% stamp sand (SS) intervals for the entire AEM Group set (N = 132). Linear regression equation is Y = 17.838X + 272, R² = 0.812, r = 0.901. The 100% SS intercept would be at 2056 ppm Cu. (b) Mean Cu concentration plotted against mean % SS for "all samples" (underwater Ponar and cores plus beach cores) under 50% SS. Regression equation is Y = 28.699X − 18; R² = 0.475, r = 0.689. The 100% SS intercept would be 2852 ppm. (c) Mean copper concentration plotted against mean % SS for "on land" (beach) samples under 50% SS. Linear regression equation is Y = 33.019X + 38; R² = 0.610, r = 0.781. The 100% SS intercept would be 3340 ppm.
Figure 19. Leaching experiments. (a) Example of six-cycle ERDC-EL prolonged leaching experiments (Gay Pile); red and blue lines are duplicate runs. Leached copper in mg/L (ppm); chronic levels (WQC) are at 0.009 mg/L, so all values are above chronic levels. Notice that the first leaching releases the most copper (130–120 ppb), although later releases vary between an additional 40–20 ppb dissolved Cu released (accumulative total = ca. 250 ppb). (b) Plot of leaching at pH 7 versus leaching at pH 7 with 20 mg/L DOC. Differences without and with DOC were significant (t-test; p < 0.05) (from [61]), illustrating enhanced release of copper in the presence of DOC.
Figure 20. Daphnia survival and fecundity in copper-rich beach waters: (a) Daphnia pulex survival and fecundity at the Control site, in Portage Lake water, off the Great Lakes Research Center (GLRC) dock. Survival percentage (97.5%) is based on forty vials. The 100 µm mesh allowed local waters, phytoplankton, and nutrients into vials, but prevented predators and escape of Daphnia. The accumulative number of juveniles born is also plotted against time (295 young). (b) Daphnia pulex survival and fecundity in 4 vial racks suspended in separate Gay stamp sand ponds. Again, survival percentage is for adults in 40 vials. In contrast to the Control (Portage Lake), there was no viable production of young. Moreover, adults survived for only 2–3 days (see x-axis). Again, the design was identical to the Control, as vials were covered by a 100 µm mesh that allowed local waters, phytoplankton, and nutrients in, but prevented predation and escape of the enclosed Daphnia.

22 pages, 7364 KiB  
Article
Vegetation Structure and Distribution Across Scales in a Large Metropolitan Area: Case Study of Austin MSA, Texas, USA
by Raihan Jamil, Jason P. Julian and Meredith K. Steele
Geographies 2025, 5(1), 11; https://doi.org/10.3390/geographies5010011 - 3 Mar 2025
Viewed by 202
Abstract
The spatial distribution of vegetation across metropolitan areas is important for wildlife habitat, air quality, heat mitigation, recreation, and other ecosystem services. This study investigated relationships between vegetation patterns and parcel characteristics at multiple scales of the Austin Metropolitan Statistical Area (MSA), a rapidly growing region in central Texas characterized by diverse biophysical and socioeconomic landscapes. We used LiDAR data to map vegetation types and distributions across a 6000 km2 study area. Principal component analysis (PCA) and regression models were employed to explore tree, shrub, and grass cover across parcels, cities, and the MSA, considering home value, age, size, and distance to the city center. At the MSA scale, tree and shrub cover were higher in the Edwards Plateau than in the Blackland Prairie ecoregion. Tree cover increased with parcel size and home value, especially in suburban areas. Older parcels had more mature trees, though less so in the grass-dominated Blackland Prairie. Shrub cover was higher on larger parcels in the Edwards Plateau, while the Blackland Prairie showed the opposite trend. PCA explained 60% of the variance, highlighting links between vegetation and urban development. Our findings reveal how biophysical and socioeconomic factors interact to shape vegetation, offering considerations for land use, housing, and green infrastructure planning. Full article
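A minimal sketch of the PCA-plus-regression workflow described in the abstract, written against a synthetic parcel table; the column names and the generated data are placeholders, not the study's actual variables or values.

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 500
parcels = pd.DataFrame({                        # synthetic stand-in for the parcel table
    "home_value": rng.lognormal(12.5, 0.4, n),
    "home_age": rng.integers(0, 80, n),
    "parcel_size_m2": rng.lognormal(6.5, 0.5, n),
    "dist_to_center_km": rng.uniform(0, 40, n),
    "tree_cover_pct": rng.uniform(0, 80, n),
    "shrub_cover_pct": rng.uniform(0, 40, n),
    "grass_cover_pct": rng.uniform(0, 90, n),
})

X = StandardScaler().fit_transform(parcels)     # standardise before PCA
pca = PCA(n_components=3).fit(X)
print("variance explained:", pca.explained_variance_ratio_.round(2))

# Follow-up regression: tree cover as a function of parcel characteristics
reg = LinearRegression().fit(parcels[["parcel_size_m2", "home_value", "home_age"]],
                             parcels["tree_cover_pct"])
print(dict(zip(["size", "value", "age"], reg.coef_.round(4))))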
Figures

Graphical abstract
Figure 1. Study area covering the Austin Metropolitan Statistical Area (MSA), including Austin in the center and nine other cities. The MSA lies on an ecoregion boundary (yellow line), with the Edwards Plateau (EP) to the west and the Blackland Prairie (BP) to the east.
Figure 2. An example neighborhood in Austin, Texas, USA, that shows the overlay of individual parcel boundaries on vegetation classes (tree, shrub, and grass) derived from a LiDAR-based canopy height model (CHM).
Figure 3. Austin MSA vegetation map (grass, shrub, and tree cover) derived from the canopy height model (CHM) for the year 2020. Statistical distributions of vegetation cover in the right margin compare the Edwards Plateau (EP) ecoregion to the west and the Blackland Prairie (BP) ecoregion to the east. An unpaired t-test was used for normally distributed variables (grass cover, shrub cover, and median tree height), while the Mann–Whitney test was applied to zero-inflated distributions (tree cover).
Figure 4. Principal component analysis (PCA) of vegetation metrics and parcel characteristics across cities (first letter) and ecoregions (second letter and symbol) in the Austin MSA.

19 pages, 4281 KiB  
Article
Rapid Target Extraction in LiDAR Sensing and Its Application in Rocket Launch Phase Measurement
by Xiaoqi Liu, Heng Shi, Meitu Ye, Minqi Yan, Fan Wang and Wei Hao
Appl. Sci. 2025, 15(5), 2651; https://doi.org/10.3390/app15052651 - 1 Mar 2025
Viewed by 218
Abstract
The paper presents a fast method for 3D point cloud target extraction, addressing the time-consuming processing of LiDAR-based 3D point cloud data. Environmental 3D point cloud data are first acquired with LiDAR and projected onto a 2D cylindrical map; the method then proceeds through image processing for segmentation and target extraction in the 2D space. A mapping matrix between the 2D grayscale image and the cylindrical projection is derived through Gaussian elimination, and a target backtracking search algorithm maps the extracted target region back to the original 3D point cloud, enabling precise extraction of the 3D target points. Near-field experiments using hybrid solid-state LiDAR demonstrate the method’s effectiveness, requiring only 0.53 s to extract 3D target point clouds from datasets containing hundreds of thousands of points. Further, far-field rocket launch experiments show that the method can extract target point clouds within 158 milliseconds, with measured positional offsets of 0.2159 m and 0.1911 m as the rocket moves away from the launch tower. Full article
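The cylindrical projection at the heart of this pipeline can be pictured in a few lines of NumPy: each point's azimuth and elevation become image columns and rows, and its range becomes the gray value. The sketch below uses assumed angular resolutions and is not the paper's mapping-matrix implementation.

import numpy as np

def cylindrical_projection(points_xyz, h_res_deg=0.2, v_res_deg=0.2):
    # Project an N x 3 point cloud onto a 2D cylindrical grayscale image
    # (assumed angular resolutions; later points overwrite earlier ones per pixel).
    x, y, z = points_xyz.T
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.degrees(np.arctan2(y, x))                        # column coordinate
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))    # row coordinate
    cols = ((azimuth - azimuth.min()) / h_res_deg).astype(int)
    rows = ((elevation.max() - elevation) / v_res_deg).astype(int)
    image = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.float32)
    image[rows, cols] = r                                         # range as the pixel value
    gray = (255 * image / image.max()).astype(np.uint8)           # normalise to 0-255
    return gray, rows, cols                  # rows/cols allow backtracking to 3D points

# Usage: gray, rows, cols = cylindrical_projection(np.random.rand(100000, 3) * 50)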
Figures

Figure 1. Cylindrical projection schematic.
Figure 2. Schematic diagram of the grayscale map conversion.
Figure 3. (a) Grayscale image; (b) the filtered grayscale image; (c) edge map; (d) surface diagram; (e) connected block diagram; and (f) target section.
Figure 4. Schematic diagram of 3D target extraction.
Figure 5. LiDAR field installation diagram.
Figure 6. Raw point cloud data collected by LiDAR.
Figure 7. (a) Grayscale image (scattergram); (b) grayscale image; (c) the filtered grayscale image; (d) edge map; (e) surface diagram; (f) connected block diagram; and (g) target section.
Figure 8. (a) Grayscale image target; (b) cylindrical projection target; (c) 3D point cloud targets (including edge clutter); and (d) 3D point cloud target.
Figure 9. Target extraction relative error chart.
Figure 10. Rocket point cloud measurement test scene diagram.
Figure 11. (a) Rocket raw point cloud map; (b) cylindrical projection diagram; (c) 2D grayscale image; (d) rocket point cloud target; and (e) rocket 3D point cloud.
Figure 12. Three-dimensional trajectory of rocket vertical launch phase.

18 pages, 6388 KiB  
Article
Optimizing Stacked Ensemble Machine Learning Models for Accurate Wildfire Severity Mapping
by Linh Nguyen Van and Giha Lee
Remote Sens. 2025, 17(5), 854; https://doi.org/10.3390/rs17050854 - 28 Feb 2025
Viewed by 208
Abstract
Wildfires are increasingly frequent and severe, posing substantial risks to ecosystems, communities, and infrastructure. Accurately mapping wildfire severity (WSM) using remote sensing and machine learning (ML) is critical for evaluating damages, informing recovery efforts, and guiding long-term mitigation strategies. Stacking ensemble ML (SEML) enhances predictive accuracy and robustness by combining multiple diverse models into a single meta-learned predictor. This approach leverages the complementary strengths of individual base learners while reducing variance, ultimately improving model reliability. This study aims to optimize a SEML framework to (1) identify the most effective ML models for use as base learners and meta-learners, and (2) determine the optimal number of base models needed for robust and accurate wildfire severity predictions. The study utilizes six ML models—Random Forests (RF), Support Vector Machines (SVM), k-Nearest Neighbors (KNN), Linear Regression (LR), Adaptive Boosting (AB), and Multilayer Perceptron (MLP)—to construct an SEML. To quantify wildfire impacts, we extracted 118 spectral indices from post-fire Landsat-8 data and incorporated four additional predictors (land cover, elevation, slope, and aspect). A dataset of 911 CBI observations from 18 wildfire events was used for training, and models were validated through cross-validation and bootstrapping to ensure robustness. To address multicollinearity and reduce computational complexity, we applied Linear Discriminant Analysis (LDA) and condensed the dataset into three primary components. Our results indicated that simpler models, notably LR and KNN, performed well as meta-learners, with LR achieving the highest predictive accuracy. Moreover, using only two base learners (RF and SVM) was sufficient to realize optimal SEML performance, with an overall accuracy and precision of 0.661, recall of 0.662, and F1-score of 0.656. These findings demonstrate that SEML can enhance wildfire severity mapping by improving prediction accuracy and supporting more informed resource allocation and management decisions. Future research should explore additional meta-learning approaches and incorporate emerging remote sensing data sources such as hyperspectral and LiDAR. Full article
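A compact scikit-learn sketch of this kind of stacked ensemble follows, with LDA reduction, RF and SVM base learners, and logistic regression standing in for the study's "LR" meta-learner; the training arrays, hyperparameters, and the assumption of four CBI severity classes are placeholders rather than the paper's actual configuration.

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

stack = make_pipeline(
    # Condense the 122 predictors into three components; n_components=3 is valid
    # only when there are at least four severity classes (an assumption here).
    LinearDiscriminantAnalysis(n_components=3),
    StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5,   # out-of-fold base-learner predictions feed the meta-learner
    ),
)
# Usage (placeholder arrays): stack.fit(X_train, y_train); y_pred = stack.predict(X_test)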
Figures

Figure 1. Spatial distribution of CBI data across 18 wildfire events in the United States, with each colored marker representing a specific wildfire.
Figure 2. Overview of the SEML framework and the six scenarios used to evaluate different combinations of base-learners and meta-learners. Panel (a) illustrates the SEML workflow, where input data is first transformed using LDA before being passed to the base-learners for initial predictions. The outputs from the base-learners are then fed into the meta-learners, which combine these predictions to generate the final model prediction. Panel (b) outlines six scenarios, where the base-learners remain constant while the meta-learner changes in each scenario.
Figure 3. Comparison of base-learner combinations for meta-learning (LR) in terms of four performance metrics: (a) Overall Accuracy, (b) Precision, (c) Recall, and (d) F1-score. The y-axis in each subplot lists various base-learner combinations, while the x-axis shows the metric values. Error bars represent variability in performance across different simulations.
Figure 4. Burn severity map illustrating the spatial distribution of the Carlton Complex wildfire.

18 pages, 3004 KiB  
Article
Forestry Segmentation Using Depth Information: A Method for Cost Saving, Preservation, and Accuracy
by Krzysztof Wołk, Jacek Niklewski, Marek S. Tatara, Michał Kopczyński and Oleg Żero
Forests 2025, 16(3), 431; https://doi.org/10.3390/f16030431 - 27 Feb 2025
Viewed by 224
Abstract
Forests are critical ecosystems, supporting biodiversity, economic resources, and climate regulation. The traditional techniques applied in forestry segmentation based on RGB photos struggle in challenging circumstances, such as fluctuating lighting, occlusions, and densely overlapping structures, which results in imprecise tree detection and categorization. Despite their effectiveness, semantic segmentation models have trouble recognizing trees apart from background objects in cluttered surroundings. In order to overcome these restrictions, this study advances forestry management by integrating depth information into the YOLOv8 segmentation model using the FinnForest dataset. Results show significant improvements in detection accuracy, particularly for spruce trees, where mAP50 increased from 0.778 to 0.848 and mAP50-95 from 0.472 to 0.523. These findings demonstrate the potential of depth-enhanced models to overcome the limitations of traditional RGB-based segmentation, particularly in complex forest environments with overlapping structures. Depth-enhanced semantic segmentation enables precise mapping of tree species, health, and spatial arrangements, critical for habitat analysis, wildfire risk assessment, and sustainable resource management. By addressing the challenges of size, distance, and lighting variations, this approach supports accurate forest monitoring, improved resource conservation, and automated decision-making in forestry. This research highlights the transformative potential of depth integration in segmentation models, laying a foundation for broader applications in forestry and environmental conservation. Future studies could expand dataset diversity, explore alternative depth technologies like LiDAR, and benchmark against other architectures to enhance performance and adaptability further. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
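One simple way to picture the depth integration is to pair each RGB frame with a relative depth map and stack them into a four-channel input, as sketched below. The file names are hypothetical, and actually feeding such input to YOLOv8 would additionally require widening the model's first convolution to four channels, which is a modelling choice not shown here.

import numpy as np
from PIL import Image

def make_rgbd(rgb_path, depth_path):
    # Stack an RGB frame and a single-channel relative depth map into an
    # H x W x 4 array; both inputs are assumed to share the same resolution.
    rgb = np.asarray(Image.open(rgb_path).convert("RGB"), dtype=np.float32) / 255.0
    depth = np.asarray(Image.open(depth_path).convert("L"), dtype=np.float32) / 255.0
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-9)  # normalise
    return np.dstack([rgb, depth])

# rgbd = make_rgbd("frame_0001.png", "frame_0001_depth.png")  # hypothetical file names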
Figures

Figure 1. Example of RGB images and corresponding semantic, instance, and panoptic segmentation from the FinnForest dataset.
Figure 2. Solution diagram.
Figure 3. Examples of the RGB images vs. the corresponding relative depth images based on Depth Anything V2.
Figure 4. Spruce and Ground segmentation with no depth information.
Figure 5. Spruce and Ground segmentation with depth information.
Figure 6. Model performance comparison for Spruce vs. Ground segmentation.

19 pages, 21047 KiB  
Article
Real-Time Localization for an AMR Based on RTAB-MAP
by Chih-Jer Lin, Chao-Chung Peng and Si-Ying Lu
Actuators 2025, 14(3), 117; https://doi.org/10.3390/act14030117 - 27 Feb 2025
Viewed by 223
Abstract
This study aimed to develop a real-time localization system for an AMR (autonomous mobile robot) running the Robot Operating System (ROS) Noetic on Ubuntu 20.04. RTAB-MAP (Real-Time Appearance-Based Mapping), a graph-based visual SLAM method that combines loop-closure detection with graph optimization, is employed for localization and mapping, integrating an RGB-D camera with a 2D LiDAR. Navigation uses the A* algorithm for global path planning combined with the Dynamic Window Approach (DWA) for local path planning, enabling the AMR to receive velocity control commands and complete the navigation task. Three graph optimization methods, TORO (Tree-based Network Optimizer), g2o (General Graph Optimization), and GTSAM (Georgia Tech Smoothing and Mapping), were used to build maps, and the resulting maps were evaluated with RTAB-MAP localization and AMCL (Adaptive Monte Carlo Localization) in a high-similarity long corridor environment. Finally, the TORO, g2o, and GTSAM methods were compared to test localization accuracy in the long corridor using the RGB-D camera and the 2D LiDAR. Full article
(This article belongs to the Special Issue Actuators in Robotic Control—3rd Edition)
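The global planner mentioned here, A*, is compact enough to sketch directly; the grid-based version below is generic and is not the ROS move_base implementation.

import heapq, itertools

def astar(grid, start, goal):
    # Shortest 4-connected path on an occupancy grid (0 = free, 1 = obstacle).
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = itertools.count()                                   # tiebreaker for the heap
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                                          # already expanded
        came_from[cur] = parent
        if cur == goal:                                       # rebuild the path
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None                                               # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]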
Figures

Figure 1. Block diagram of the AMR for this experiment.
Figure 2. (a) Architecture of RTAB-MAP. (b) Flowchart of the RTAB-MAP method.
Figure 3. (a) Experimental location (AMR moves from A, B, C, D, E, to F); (b) graph optimization setup for RTAB-MAP with TORO.
Figure 4. (a) Loop closure detection for time t = 00:10. (b) Loop closure detection for time t = 01:02. (c) Loop closure detection for initial time t = 02:07. (d) Loop closure detection for time t = 03:21. (e) Loop closure detection for time t = 04:24.
Figure 5. Localization graph for RTAB-MAP with TORO.
Figure 6. Localization graph for RTAB-MAP with g2o.
Figure 7. Localization graph for RTAB-MAP with GTSAM.
Figure 8. Proposed TF tree in ROS.
Figure 9. Move_base node [41].
Figure 10. Recovery behaviors of the move_base node [42].
Figure 11. (a) Obstacle avoidance and loop closure detection; (b) beginning of the task; (c) destination of the obstacle avoidance task.
Figure 12. Navigation results for AMCL with TORO.
Figure 13. Navigation results for RTAB-MAP with TORO.
Figure 14. Navigation photos for the proposed RTAB-MAP with TORO.
Figure 15. (a) Obstacle avoidance trajectories of TORO for RTAB-MAP. (b) Obstacle avoidance trajectories of g2o for RTAB-MAP. (c) Obstacle avoidance trajectories of GTSAM for RTAB-MAP.

24 pages, 8561 KiB  
Review
A Review of Research on SLAM Technology Based on the Fusion of LiDAR and Vision
by Peng Chen, Xinyu Zhao, Lina Zeng, Luxinyu Liu, Shengjie Liu, Li Sun, Zaijin Li, Hao Chen, Guojun Liu, Zhongliang Qiao, Yi Qu, Dongxin Xu, Lianhe Li and Lin Li
Sensors 2025, 25(5), 1447; https://doi.org/10.3390/s25051447 - 27 Feb 2025
Viewed by 296
Abstract
In recent years, simultaneous localization and mapping (SLAM) based on the fusion of LiDAR and vision has gained extensive attention in the field of autonomous navigation and environment sensing. The limitations of single sensors in feature-scarce (low-texture, repetitive-structure) and dynamic environments have prompted researchers to combine LiDAR with other sensors, particularly vision sensors; fused with deep learning and adaptive algorithms, this approach has proven effective across a wide variety of situations. LiDAR excels in complex and dynamic environments thanks to its ability to acquire high-precision 3D spatial information with high reliability. This paper analyzes the research status, main results, and findings of early single-sensor SLAM and of the current stage of LiDAR and vision fusion SLAM. By categorizing and summarizing the existing literature, it examines specific solutions to current problems (complexity of data fusion, computational burden and real-time performance, multi-scenario data processing, etc.), discusses the trends and limitations of current research, and looks forward to future research directions, including multi-sensor fusion, optimization of algorithms, improvement of real-time performance, and expansion of application scenarios. This review aims to provide guidelines and insights for the development of SLAM technology based on LiDAR and vision fusion, and a reference for further SLAM research. Full article
(This article belongs to the Section Navigation and Positioning)
Figures

Figure 1. Overview of the SLAM research progress.
Figure 2. The three primary steps of the framework for traditional SLAM architecture: (a) the eigenvalue determination for estimating the global stage; (b) original data processing stage; (c) global map creation and data inconsistencies stage.
Figure 3. Diagram of the flow of multimodal data fusion technology.
Figure 4. (a) The PVL-Cartographer system's misalignment is depicted without closed-loop detection; (b) the outcomes following closed-loop detection. In (a,b), the region where loop closure detection is carried out is enclosed by the white rectangle. In order to merge LiDAR point clouds with panoramic photos, the system combines tilt-mounted LiDAR, panoramic cameras, and IMU sensors. It then uses internal data and algorithms to calculate the actual scale of the environment without the need for extra positional information. Even in surroundings with limited features, it may function effectively and dependably by achieving the seamless integration of data from many sensors, boosting the positioning and mapping results' accuracy and dependability [30].
Figure 5. Comparison of the Lou et al. scheme and the quality of 3D reconstruction using DynaSLAM techniques: (a) reconstruction of DynaSLAM, (b) reconstruction of Lou et al.'s method [32].
Figure 6. The entire LSD-SLAM algorithm system framework diagram [38].
Figure 7. The maps produced by the original ORB-SLAM system. (a) System-generated map for ORB-SLAM, front view; (b) system-generated map for ORB-SLAM, vertical view; (c) ORB-SLAM with Sun et al.'s approach, front view; (d) ORB-SLAM with Sun et al.'s approach, vertical view [60].
Figure 8. The suggested environment perception system based on LiDAR and vision. (a) The map in top perspective; (b) a navigational two-dimensional grid map; (c) side view of the map; (d) three-dimensional point cloud map; SLAM technology illustration using multi-sensor fusion.

Full article ">
21 pages, 20898 KiB  
Article
Combining UAV and Sentinel Satellite Data to Delineate Ecotones at Multiscale
by Yuxin Ma, Zhangjian Xie, Xiaolin She, Hans J. De Boeck, Weihong Liu, Chaoying Yang, Ninglv Li, Bin Wang, Wenjun Liu and Zhiming Zhang
Forests 2025, 16(3), 422; https://doi.org/10.3390/f16030422 - 26 Feb 2025
Viewed by 287
Abstract
Ecotones, i.e., transition zones between habitats, are important landscape features, yet they are often ignored in landscape monitoring. This study addresses the challenge of delineating ecotones at multiple scales by integrating multisource remote sensing data, including ultra-high-resolution RGB images, LiDAR data from UAVs, and satellite data. We first developed a fine-resolution landcover map of three plots in Yunnan, China, with accurate delineation of ecotones using orthoimages and canopy height data derived from UAV-LiDAR. These maps were subsequently used as the training set for four machine learning models, from which the most effective model was selected as an upscaling model. The satellite data, encompassing Synthetic Aperture Radar (SAR; Sentinel-1), multispectral imagery (Sentinel-2), and topographic data, functioned as explanatory variables. The Random Forest model performed the best among the four models (kappa coefficient = 0.78), with the red band, shortwave infrared band, and vegetation red edge band as the most significant spectral variables. Using this RF model, we compared landscape patterns between 2017 and 2023 to test the model’s ability to quantify ecotone dynamics. We found an increase in ecotone over this period that can be attributed to an expansion of 0.287 km2 (1.1%). In sum, this study demonstrates the effectiveness of combining UAV and satellite data for precise, large-scale ecotone detection. This can enhance our understanding of the dynamic relationship between ecological processes and landscape pattern evolution. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
Show Figures

Figure 1
<p>Workflow of the proposed upscaling model for ecotone detection at landscape scale. KNN, RF, SVM, and GBDT represent four machine learning algorithms: k-nearest-neighbors, random forest, support vector machine, and gradient boosting decision tree.</p>
Full article ">Figure 2
<p>The location of the study area and the distribution of three sample sites.</p>
Full article ">Figure 3
<p>Shrinking process diagram. H<sub>ij</sub> represents the CHM of any pixel, H<sub>f</sub> represents the average height of the forest, and H<sub>g</sub> represents the average height of the grassland. P<sub>gij</sub> represents the proportion of grass pixels within a moving window. P<sub>fij</sub> represents the proportion of forest pixels within a moving window.</p>
Full article ">Figure 4
<p>The digital terrain model (DTM), digital surface model (DSM), and canopy height model (CHM) of the three sample sites used in our study (for details, see <a href="#forests-16-00422-f002" class="html-fig">Figure 2</a>). (<b>a</b>–<b>c</b>) show the BJB site, (<b>d</b>–<b>f</b>) show the DJB site, and (<b>g</b>–<b>i</b>) show the STZ site.</p>
Full article ">Figure 5
<p>The orthoimage of three sample plots (<b>a</b>–<b>c</b>), fine-scale landscape pattern map derived from UAV-data (<b>d</b>–<b>f</b>), and landscape pattern maps including the delineated ecotones (shadows removed) (<b>g</b>–<b>i</b>).</p>
Full article ">Figure 6
<p>Comparison of the classification using only satellite data and the upscaling model. The orthoimage of three sample plots (<b>a</b>–<b>c</b>), landscape pattern map derived from the upscaling model (<b>d</b>–<b>f</b>), Sentinel-2 imagery of three sample plots (<b>g</b>–<b>i</b>), and landscape pattern map derived from only satellite data (<b>j</b>–<b>l</b>).</p>
Full article ">Figure 7
<p>The mean decrease accuracy and ranking of the 29 variables for the ecotone upscaling model. Pink bars indicate the four most important variables in the model.</p>
Full article ">Figure 8
<p>The landscape pattern maps of ecotones in 2017 and 2023.</p>
Full article ">
25 pages, 6071 KiB  
Article
A Multi-Scale Spatio-Temporal Fusion Network for Occluded Small Object Detection in Geiger-Mode Avalanche Photodiode LiDAR Systems
by Yuanxue Ding, Dakuan Du, Jianfeng Sun, Le Ma, Xianhui Yang, Rui He, Jie Lu and Yanchen Qu
Remote Sens. 2025, 17(5), 764; https://doi.org/10.3390/rs17050764 - 22 Feb 2025
Viewed by 304
Abstract
The Geiger-Mode Avalanche Photodiode (Gm-APD) LiDAR system demonstrates high-precision detection capabilities over long distances. However, the detection of occluded small objects at long distances poses significant challenges, limiting its practical application. To address this issue, we propose a multi-scale spatio-temporal object detection network [...] Read more.
The Geiger-Mode Avalanche Photodiode (Gm-APD) LiDAR system demonstrates high-precision detection capabilities over long distances. However, the detection of occluded small objects at long distances poses significant challenges, limiting its practical application. To address this issue, we propose a multi-scale spatio-temporal object detection network (MSTOD-Net), designed to associate object information across different spatio-temporal scales for the effective detection of occluded small objects. Specifically, in the encoding stage, a dual-channel feature fusion framework is employed to process range and intensity images from consecutive time frames, facilitating the detection of occluded objects. Considering the significant differences between range and intensity images, a multi-scale context-aware (MSCA) module and a feature fusion (FF) module are incorporated to enable efficient cross-scale feature interaction and enhance small object detection. Additionally, an edge perception (EDGP) module is integrated into the network’s shallow layers to refine the edge details and enhance the information in unoccluded regions. In the decoding stage, feature maps from the encoder are upsampled and combined with multi-level fused features, and four prediction heads are employed to decode the object categories, confidence, widths and heights, and displacement offsets. The experimental results demonstrate that the MSTOD-Net achieves mAP50 and mAR50 scores of 96.4% and 96.9%, respectively, outperforming the state-of-the-art methods. Full article
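As a rough illustration of the dual-channel idea (not the MSTOD-Net architecture itself), the PyTorch sketch below encodes stacked range and intensity frames from consecutive time steps in two branches, fuses them by concatenation, and attaches four heads for class scores, confidence, box width/height, and center offsets; every layer size and module name here is an assumption for demonstration only.

```python
# Minimal dual-channel detector sketch (illustrative, not the authors' network).
import torch
import torch.nn as nn

class TinyBranch(nn.Module):
    """Small convolutional encoder for one image modality."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class DualChannelDetector(nn.Module):
    def __init__(self, frames=2, num_classes=3):
        super().__init__()
        self.range_branch = TinyBranch(frames)       # stacked range frames
        self.intensity_branch = TinyBranch(frames)   # stacked intensity frames
        self.fuse = nn.Conv2d(64, 64, 1)
        # Four heads: class scores, objectness/confidence, (w, h), (dx, dy).
        self.cls_head = nn.Conv2d(64, num_classes, 1)
        self.conf_head = nn.Conv2d(64, 1, 1)
        self.wh_head = nn.Conv2d(64, 2, 1)
        self.offset_head = nn.Conv2d(64, 2, 1)

    def forward(self, range_imgs, intensity_imgs):
        f = torch.cat([self.range_branch(range_imgs),
                       self.intensity_branch(intensity_imgs)], dim=1)
        f = torch.relu(self.fuse(f))
        return (self.cls_head(f), self.conf_head(f),
                self.wh_head(f), self.offset_head(f))

# Example forward pass on dummy 64x64 range/intensity frames from two time steps.
model = DualChannelDetector()
cls, conf, wh, off = model(torch.randn(1, 2, 64, 64), torch.randn(1, 2, 64, 64))
print(cls.shape, conf.shape, wh.shape, off.shape)
```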
Show Figures

Figure 1
<p>Overall architecture of MSTOD-Net.</p>
Full article ">Figure 2
<p>EDGP module.</p>
Full article ">Figure 3
<p>FF module.</p>
Full article ">Figure 4
<p>MSCA module.</p>
Full article ">Figure 5
<p>Data acquisition system.</p>
Full article ">Figure 6
<p>Detection results for different methods in scenario 1. (<b>a</b>) Faster-RCNN; (<b>b</b>) SSD; (<b>c</b>) RetinaNet; (<b>d</b>) YOLOv3; (<b>e</b>) YOLOv5; (<b>f</b>) CenterNet; (<b>g</b>) Focs; (<b>h</b>) YOLOX; (<b>i</b>) YOLOv8; (<b>j</b>) ours.</p>
Full article ">Figure 6 Cont.
<p>Detection results for different methods in scenario 1. (<b>a</b>) Faster-RCNN; (<b>b</b>) SSD; (<b>c</b>) RetinaNet; (<b>d</b>) YOLOv3; (<b>e</b>) YOLOv5; (<b>f</b>) CenterNet; (<b>g</b>) Focs; (<b>h</b>) YOLOX; (<b>i</b>) YOLOv8; (<b>j</b>) ours.</p>
Full article ">Figure 7
<p>Detection results of different methods in scenario 2. (<b>a</b>) Faster-RCNN; (<b>b</b>) SSD; (<b>c</b>) RetinaNet; (<b>d</b>) YOLOv3; (<b>e</b>) YOLOv5; (<b>f</b>) CenterNet; (<b>g</b>) Focs; (<b>h</b>) YOLOX; (<b>i</b>) YOLOv8; (<b>j</b>) ours.</p>
Full article ">Figure 7 Cont.
<p>Detection results of different methods in scenario 2. (<b>a</b>) Faster-RCNN; (<b>b</b>) SSD; (<b>c</b>) RetinaNet; (<b>d</b>) YOLOv3; (<b>e</b>) YOLOv5; (<b>f</b>) CenterNet; (<b>g</b>) Focs; (<b>h</b>) YOLOX; (<b>i</b>) YOLOv8; (<b>j</b>) ours.</p>
Full article ">Figure 8
<p>Detection results and corresponding heat maps. (<b>a</b>) Scenario 1; (<b>b</b>) scenario 2.</p>
Full article ">Figure 9
<p>Detection results under different occlusion ratios using different methods. (<b>a</b>) Faster-RCNN; (<b>b</b>) SSD; (<b>c</b>) RetinaNet; (<b>d</b>) YOLOv3; (<b>e</b>) YOLOv5; (<b>f</b>) CenterNet; (<b>g</b>) Focs; (<b>h</b>) YOLOX; (<b>i</b>) YOLOv8; (<b>j</b>) ours.</p>
Full article ">Figure 9 Cont.
<p>Detection results under different occlusion ratios using different methods. (<b>a</b>) Faster-RCNN; (<b>b</b>) SSD; (<b>c</b>) RetinaNet; (<b>d</b>) YOLOv3; (<b>e</b>) YOLOv5; (<b>f</b>) CenterNet; (<b>g</b>) Focs; (<b>h</b>) YOLOX; (<b>i</b>) YOLOv8; (<b>j</b>) ours.</p>
Full article ">Figure 9 Cont.
<p>Detection results under different occlusion ratios using different methods. (<b>a</b>) Faster-RCNN; (<b>b</b>) SSD; (<b>c</b>) RetinaNet; (<b>d</b>) YOLOv3; (<b>e</b>) YOLOv5; (<b>f</b>) CenterNet; (<b>g</b>) Focs; (<b>h</b>) YOLOX; (<b>i</b>) YOLOv8; (<b>j</b>) ours.</p>
Full article ">Figure 10
<p>Detection results with different <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>d</mi> <mi>i</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math> values.</p>
Full article ">
33 pages, 31157 KiB  
Article
A Mobile LiDAR-Based Deep Learning Approach for Real-Time 3D Body Measurement
by Yongho Jeong, Taeuk Noh, Yonghak Lee, Seonjae Lee, Kwangil Choi, Sujin Jeong and Sunghwan Kim
Appl. Sci. 2025, 15(4), 2001; https://doi.org/10.3390/app15042001 - 14 Feb 2025
Viewed by 395
Abstract
In this study, we propose a solution for automatically measuring body circumferences by utilizing the built-in LiDAR sensor in mobile devices. Traditional body measurement methods mainly rely on 2D images or manual measurements. This research, however, utilizes 3D depth information to enhance both [...] Read more.
In this study, we propose a solution for automatically measuring body circumferences by utilizing the built-in LiDAR sensor in mobile devices. Traditional body measurement methods mainly rely on 2D images or manual measurements. This research, however, utilizes 3D depth information to enhance both accuracy and efficiency. By employing HRNet-based keypoint detection and transfer learning through deep learning, the precise locations of body parts are identified and combined with depth maps to automatically calculate body circumferences. Experimental results demonstrate that the proposed method exhibits a relative error of up to 8% for major body parts such as waist, chest, hip, and buttock circumferences, with waist and buttock measurements recording low error rates below 4%. Although some models showed error rates of 7.8% and 7.4% in hip circumference measurements, this was attributed to the complexity of 3D structures and the challenges in selecting keypoint locations. Additionally, the use of depth map-based keypoint correction and regression analysis significantly improved accuracy compared to conventional 2D-based measurement methods. The real-time processing speed was also excellent, ensuring stable performance across various body types. Full article
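The paper combines detected keypoints with LiDAR depth to turn pixel measurements into metric circumferences. A minimal sketch of that kind of computation is given below, assuming a pinhole camera model and an elliptical cross-section whose width and depth come from front- and side-view keypoint pairs; the keypoint coordinates, focal length, and the ellipse approximation (Ramanujan's formula) are illustrative assumptions, not the authors' exact procedure.

```python
# Illustrative circumference estimate from keypoints plus a depth map.
import math

def pixel_span_to_meters(px_left, px_right, depth_m, fx):
    """Convert a horizontal pixel span at a given depth to meters using the
    pinhole model (fx = focal length in pixels)."""
    return abs(px_right - px_left) * depth_m / fx

def ellipse_circumference(a, b):
    """Ramanujan's approximation for an ellipse with semi-axes a and b."""
    h = ((a - b) ** 2) / ((a + b) ** 2)
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

# Hypothetical keypoints: (pixel column, depth from the LiDAR depth map in meters).
front_k1, front_k2 = (400, 1.02), (835, 1.03)   # waist endpoints, front view
side_k1, side_k2 = (450, 1.01), (769, 1.02)     # waist endpoints, side view
fx = 1450.0                                      # assumed focal length in pixels

width = pixel_span_to_meters(front_k1[0], front_k2[0], (front_k1[1] + front_k2[1]) / 2, fx)
depth = pixel_span_to_meters(side_k1[0], side_k2[0], (side_k1[1] + side_k2[1]) / 2, fx)

# Treat the waist cross-section as an ellipse with these two diameters.
waist_circumference = ellipse_circumference(width / 2, depth / 2)
print(f"estimated waist circumference: {waist_circumference:.3f} m")
```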
Show Figures

Figure 1
<p>An example of human pose estimation using 2D images.</p>
Full article ">Figure 2
<p>Flowchart of 3D Measurement Methodology.</p>
Full article ">Figure 3
<p>Circumference estimation using keypoints <math display="inline"><semantics> <msub> <mi>k</mi> <mn>1</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>k</mi> <mn>2</mn> </msub> </semantics></math> detected from the camera’s perspective.</p>
Full article ">Figure 4
<p>This shows the geometric relationship between 2D and 3D keypoints for the top view of an object in circumference estimation.</p>
Full article ">Figure 5
<p>(<b>a</b>) RGB photo of the mannequin; (<b>b</b>) depth map; (<b>c</b>) edge detection using the Canny algorithm; (<b>d</b>) keypoint detection on the depth map image.</p>
Full article ">Figure 6
<p>(<b>a</b>) RGB photo of the cylinder; (<b>b</b>) depth map; (<b>c</b>) edge detection using the Canny algorithm; (<b>d</b>) keypoint detection on the depth map image.</p>
Full article ">Figure 7
<p>Graphs showing (<b>a</b>) the cylinder’s uncorrected keypoints, (<b>b</b>) the cylinder’s corrected keypoints, (<b>c</b>) the mannequin chest’s uncorrected keypoints, and (<b>d</b>) the mannequin chest’s corrected keypoints. In the corrected graphs, the y-axis automatically adjusts its maximum value as the pixel distance decreases.</p>
Full article ">Figure 8
<p>(<b>a</b>) RGB capture of the model; (<b>b</b>) depth map conversion; (<b>c</b>) Canny edge detection.</p>
Full article ">Figure 9
<p>Keypoints were placed at both endpoints of the waist, chest, hips, and buttocks in the front and side views to measure the length of each body part. (<b>a</b>) shows Model 1 and (<b>b</b>) shows Model 2; Models 3 and 4 are described in <a href="#app1-applsci-15-02001" class="html-app">Appendix A</a>.</p>
Full article ">Figure 10
<p>Graphs presenting the data before keypoint correction, where the y-axis represents the distance between the subject’s pixels and the background. Graph (<b>a</b>) illustrates Model 1, while (<b>b</b>) illustrates Model 2. The pixel distances for the other body parts of Models 1 and 2, as well as for Models 3 and 4, are detailed in <a href="#app1-applsci-15-02001" class="html-app">Appendix A</a>.</p>
Full article ">Figure A1
<p>These figures show the RGB captures of Model 2.</p>
Full article ">Figure A2
<p>These figures show the RGB captures of Model 3.</p>
Full article ">Figure A3
<p>These figures show the RGB captures of Model 4.</p>
Full article ">Figure A4
<p>The figure shows the depth map conversion of Model 2.</p>
Full article ">Figure A5
<p>The figure shows the depth map conversion of Model 3.</p>
Full article ">Figure A6
<p>The figure shows the depth map conversion of Model 4.</p>
Full article ">Figure A7
<p>The figure shows the Canny edge detection for Model 2.</p>
Full article ">Figure A8
<p>The figure shows the Canny edge detection for Model 3.</p>
Full article ">Figure A9
<p>The figure shows the Canny edge detection for Model 4.</p>
Full article ">Figure A10
<p>Keypoints were positioned at both ends of the waist, chest, hips, and buttocks in the front and side views of Model 1 to measure each body part’s length.</p>
Full article ">Figure A11
<p>Keypoints were positioned at both ends of the waist, chest, hips, and buttocks in the front and side views of Model 2 to measure each body part’s length.</p>
Full article ">Figure A12
<p>Keypoints were positioned at both ends of the waist, chest, hips, and buttocks in the front and side views of Model 3 to measure each body part’s length.</p>
Full article ">Figure A13
<p>Keypoints were positioned at both ends of the waist, chest, hips, and buttocks in the front and side views of Model 4 to measure each body part’s length.</p>
Full article ">Figure A14
<p>Graphs showing the data before keypoint correction of Model 1, with the y-axis representing the distance between the subject’s pixels and the background.</p>
Full article ">Figure A15
<p>Graphs showing the data before keypoint correction of Model 2, with the y-axis representing the distance between the subject’s pixels and the background.</p>
Full article ">Figure A16
<p>Graphs showing the data before keypoint correction of Model 3, with the y-axis representing the distance between the subject’s pixels and the background.</p>
Full article ">Figure A17
<p>Graphs showing the data before keypoint correction of Model 4, with the y-axis representing the distance between the subject’s pixels and the background.</p>
Full article ">Figure A18
<p>Graphs showing the data after keypoint correction of Model 1. The y-axis automatically adjusts its maximum value as the distance between pixels decreases.</p>
Full article ">Figure A19
<p>Graphs showing the data after keypoint correction of Model 2. The y-axis automatically adjusts its maximum value as the distance between pixels decreases.</p>
Full article ">Figure A20
<p>Graphs showing the data after keypoint correction of Model 3. The y-axis automatically adjusts its maximum value as the distance between pixels decreases.</p>
Full article ">Figure A21
<p>Graphs showing the data after keypoint correction of Model 4. The y-axis automatically adjusts its maximum value as the distance between pixels decreases.</p>
Full article ">
30 pages, 8823 KiB  
Article
General Approach for Forest Woody Debris Detection in Multi-Platform LiDAR Data
by Renato César dos Santos, Sang-Yeop Shin, Raja Manish, Tian Zhou, Songlin Fei and Ayman Habib
Remote Sens. 2025, 17(4), 651; https://doi.org/10.3390/rs17040651 - 14 Feb 2025
Viewed by 372
Abstract
Woody debris (WD) is an important element in forest ecosystems. It provides critical habitats for plants, animals, and insects. It is also a source of fuel contributing to fire propagation and sometimes leads to catastrophic wildfires. WD inventory is usually conducted through field [...] Read more.
Woody debris (WD) is an important element in forest ecosystems. It provides critical habitats for plants, animals, and insects. It is also a source of fuel contributing to fire propagation and sometimes leads to catastrophic wildfires. WD inventory is usually conducted through field surveys using transects and sample plots. Light Detection and Ranging (LiDAR) point clouds are emerging as a valuable source for the development of comprehensive WD detection strategies. Results from previous LiDAR-based WD detection approaches are promising. However, there is no general strategy for handling point clouds acquired by different platforms with varying characteristics such as the pulse repetition rate and sensor-to-object distance in natural forests. This research proposes a general and adaptive morphological WD detection strategy that requires only a few intuitive thresholds, making it suitable for multi-platform LiDAR datasets in both plantation and natural forests. The conceptual basis of the strategy is that WD LiDAR points exhibit non-planar characteristics and a distinct intensity and comprise clusters that exceed a minimum size. The developed strategy was tested using leaf-off point clouds acquired by Geiger-mode airborne, uncrewed aerial vehicle (UAV), and backpack LiDAR systems. The results show that using the intensity data did not provide a noticeable improvement in the WD detection results. Quantitatively, the approach achieved an average recall of 0.83, indicating a low rate of omission errors. Datasets with a higher point density (i.e., from UAV and backpack LiDAR) showed better performance. As for the precision evaluation metric, it ranged from 0.40 to 0.85. The precision depends on commission errors introduced by bushes and undergrowth. Full article
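The detection strategy rests on three intuitive tests: WD points are non-planar, optionally have a distinct intensity, and form clusters above a minimum size. The sketch below shows one plausible realization of the planarity and cluster-size criteria using local PCA and DBSCAN; the planarity definition, neighborhood size, and thresholds are assumptions for illustration, not the values used in the paper, and the intensity criterion is omitted.

```python
# Hedged sketch of the core filtering idea: per-point planarity from a local PCA,
# DBSCAN clustering of the non-planar (hypothesized WD) points, and removal of
# clusters whose maximum spread is below a minimum length.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

def planarity(points, k=20):
    """Planarity (l2 - l3) / l1 from the eigenvalues l1 >= l2 >= l3 of each
    point's k-neighborhood covariance (a common formulation)."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    p = np.empty(len(points))
    for i, neigh in enumerate(idx):
        cov = np.cov(points[neigh].T)
        l = np.sort(np.linalg.eigvalsh(cov))[::-1]
        p[i] = (l[1] - l[2]) / max(l[0], 1e-12)
    return p

def detect_wd(points, planarity_max=0.4, eps=0.3, min_samples=10, min_spread=1.0):
    """Return a cluster label per point (-1 = not woody debris)."""
    candidate = planarity(points) < planarity_max          # keep non-planar points
    labels = np.full(len(points), -1)
    cand_idx = np.where(candidate)[0]
    if len(cand_idx) == 0:
        return labels
    db = DBSCAN(eps=eps, min_samples=min_samples).fit(points[cand_idx])
    for lab in set(db.labels_) - {-1}:
        members = cand_idx[db.labels_ == lab]
        spread = np.ptp(points[members], axis=0).max()      # largest extent of the cluster
        if spread >= min_spread:                            # discard small clusters
            labels[members] = lab
    return labels

# Example on random points near the forest floor (height-normalized coordinates, meters).
pts = np.random.rand(2000, 3) * [20, 20, 0.5]
print(np.unique(detect_wd(pts)))
```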
Show Figures

Figure 1
<p>Data acquisition systems used in this study: (<b>a</b>) Geiger-mode high-altitude airborne, (<b>b</b>) UAV, and (<b>c</b>) backpack LiDAR systems.</p>
Full article ">Figure 2
<p>Location of forest areas and spatial distribution of validation regions.</p>
Full article ">Figure 3
<p>Sample close-up views of acquired datasets at different sites—perspective views colored by height and intensity (left and middle columns) and point density maps/statistics (right column): (<b>a</b>) Geiger-mode, (<b>b</b>) UAV, and (<b>c</b>) backpack LiDAR systems.</p>
Full article ">Figure 4
<p>Proposed workflow for the WD detection strategy.</p>
Full article ">Figure 5
<p>Illustration of a sample point cloud from the McCormick Woods dataset: (<b>a</b>) original point cloud, (<b>b</b>) normalized height point cloud, and (<b>c</b>) isolated point cloud close to the forest floor.</p>
Full article ">Figure 6
<p>Illustration of a sample region in McCormick Woods acquired by Geiger-mode LiDAR: (<b>a</b>) normalized height point cloud and (<b>b</b>) corresponding planarity map.</p>
Full article ">Figure 7
<p>Illustration of the two-step classification strategy using planarity and intensity attributes for WD detection.</p>
Full article ">Figure 8
<p>Illustration of sample point cloud collected by a backpack system showing (<b>a</b>) original intensity and (<b>b</b>) normalized intensity.</p>
Full article ">Figure 9
<p>Illustration of intensity normalization: (<b>a</b>) procedure for intensity normalization and (<b>b</b>) intensity histogram before normalization and (<b>c</b>) after normalization.</p>
Full article ">Figure 10
<p>Illustration of derived confusion matrix before (<b>a</b>) and after (<b>b</b>) intensity normalization together with the precision, recall, and F1-score metrics.</p>
Full article ">Figure 11
<p>Illustration of the refinement of WD detection based on cluster spread: (<b>a</b>) DBSCAN segmentation of hypothesized WD, (<b>b</b>) maximum spread for selected clusters, shown by black arrows, and (<b>c</b>) WD detection result after eliminating small clusters (the Geiger_MCNF_2021 dataset is used for this illustration).</p>
Full article ">Figure 12
<p>Clipped view of the Geiger_MCNF_2021 dataset: (<b>a</b>) normalized height point cloud colored by height and (<b>b</b>) point cloud colored by original intensity; and WD detection results (randomly colored by cluster ID) (<b>c</b>) using planarity criterion, (<b>d</b>) using planarity and intensity criteria, and (<b>e</b>) reference data.</p>
Full article ">Figure 13
<p>Clipped view of the <span class="html-italic">UAV-MNF_Plot_4d_a-2021</span> dataset: (<b>a</b>) normalized height point cloud colored by height, (<b>b</b>) point cloud colored by original intensity, and (<b>c</b>) point cloud colored by normalized intensity; and WD detection results (randomly colored by cluster ID) (<b>d</b>) using planarity criterion, (<b>e</b>) using planarity and original intensity criteria, (<b>f</b>) using planarity and normalized intensity criteria, and (<b>g</b>) reference data.</p>
Full article ">Figure 14
<p>Clipped view of the BP_MNF_Plot_4d_b_2022 dataset: (<b>a</b>) normalized height point cloud colored by height, (<b>b</b>) point cloud colored by original intensity, and (<b>c</b>) point cloud colored by normalized intensity; and WD detection results (randomly colored by cluster ID) (<b>d</b>) using planarity criterion, (<b>e</b>) using planarity and original intensity criteria, (<b>f</b>) using planarity and normalized intensity criteria, and (<b>g</b>) reference data.</p>
Full article ">Figure 15
<p>Comparison of F1-scores from pixel-based and object-based analyses of Geiger-mode, UAV, and backpack LiDAR data using (<b>a</b>) planarity criterion alone and (<b>b</b>) planarity combined with intensity—the original intensity was only used for the Geiger-mode LiDAR, whereas, for the UAV and backpack LiDAR, intensity was normalized.</p>
Full article ">Figure 16
<p>Impact of point density on WD detection for aerial and terrestrial datasets showing clipped view of the point cloud colored by height (top) and WD detection results using planarity criterion (bottom): (<b>a</b>) Geiger_MNF_Plot_4d_a_2021, (<b>b</b>) UAV_MNF_Plot_4d_a_2021, (<b>c</b>) BP_MNF_Plot_4d_b_2022, and (<b>d</b>) reference data.</p>
Full article ">Figure 17
<p>Clipped view of the UAV_MNF_Plot_4d_b_2022 (top) and BP_MNF_Plot_4d_b_2022 (bottom) datasets: (<b>a</b>) normalized height point cloud colored by height, (<b>b</b>) WD detection results using planarity criterion, (<b>c</b>) zoom-in view of WD detection results, and (<b>d</b>) reference data.</p>
Full article ">
17 pages, 4402 KiB  
Article
Quality Evaluation for Colored Point Clouds Produced by Autonomous Vehicle Sensor Fusion Systems
by Colin Schaefer, Zeid Kootbally and Vinh Nguyen
Sensors 2025, 25(4), 1111; https://doi.org/10.3390/s25041111 - 12 Feb 2025
Viewed by 373
Abstract
Perception systems for autonomous vehicles (AVs) require various types of sensors, including light detection and ranging (LiDAR) and cameras, to ensure their robustness in driving scenarios and weather conditions. The data from these sensors are fused together to generate maps of the surrounding [...] Read more.
Perception systems for autonomous vehicles (AVs) require various types of sensors, including light detection and ranging (LiDAR) and cameras, to ensure their robustness in driving scenarios and weather conditions. The data from these sensors are fused together to generate maps of the surrounding environment and provide information for the detection and tracking of objects. Hence, evaluation methods are necessary to compare existing and future sensor systems through quantifiable measurements given the wide range of sensor models and design choices. This paper presents an evaluation method to compare colored point clouds, a common fused data type, among two LiDAR–camera fusion systems and a stereo camera setup. The evaluation approach uses a test artifact measured by the fusion system’s colored point cloud through the spread, area coverage, and color difference of the colored points within the computed space. The test results showed the evaluation approach was able to rank the sensor fusion systems based on its metrics and complement the experimental observations. The proposed evaluation methodology is, therefore, suitable towards the comparison of generated colored point clouds by sensor fusion systems. Full article
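Two of the evaluation metrics, the spread of the artifact points about a reference plane and their color difference, can be expressed compactly. The sketch below computes the spread as the RMS point-to-plane distance to a least-squares plane and the color difference as a mean Euclidean RGB distance to a known reference color; the plane-fitting method, the RGB distance, and all numeric values are assumptions rather than the paper's exact definitions.

```python
# Hedged sketch of two colored-point-cloud quality metrics.
import numpy as np

def fit_plane(points):
    """Least-squares plane through points; returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]            # direction of least variance = plane normal

def rms_point_to_plane(points):
    """Spread metric: RMS signed distance of the points to their fitted plane."""
    centroid, normal = fit_plane(points)
    d = (points - centroid) @ normal
    return np.sqrt(np.mean(d ** 2))

def mean_color_difference(colors_rgb, reference_rgb):
    """Mean Euclidean distance between point colors and a reference RGB color."""
    return np.mean(np.linalg.norm(colors_rgb - np.asarray(reference_rgb), axis=1))

# Example with synthetic artifact points (meters) and 8-bit RGB colors.
pts = np.column_stack([np.random.rand(500), np.random.rand(500), 0.002 * np.random.randn(500)])
cols = np.clip(np.random.normal([200, 30, 30], 5, size=(500, 3)), 0, 255)
print(f"spread (RMS) = {rms_point_to_plane(pts) * 1000:.1f} mm")
print(f"mean color difference = {mean_color_difference(cols, (205, 25, 25)):.1f}")
```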
Show Figures

Figure 1
<p>Overview of the development of the colored point cloud evaluation method. The research approach starts with performing the tests to obtain raw data consisting of colored point cloud representations of the test artifact.</p>
Full article ">Figure 2
<p>A schematic showing a top view of the test setup for each artifact side with the sensor shown in grey (<b>left</b>). An example of a detectable test at 2 m with the artifact and sensor mount circled in yellow (<b>right</b>).</p>
Full article ">Figure 3
<p>Images of sample results visually describing the conventions followed when isolating the points of the test artifact within the interpolation fusion data.</p>
Full article ">Figure 4
<p>Results from a sample test showing an example of how a bounding box was used to isolate the test artifacts points in stereoscopic data for tests at distances greater than 3 m.</p>
Full article ">Figure 5
<p>Developing the splitting plane to classify the artifact points based on their location. Coordinates are in meters.</p>
Full article ">Figure 6
<p>The process of creating the reference planes for both sides of the test artifact points. Coordinates are in meters.</p>
Full article ">Figure 7
<p>A series of plots visually describing the sequence of determining the point coverage of a set of artifact points. For these images, the stereoscopic data of the left-undetectable side from the 1 m test were used, with the projected artifact points on the left reference plane shown in black (<b>left</b>), forming the shape of the total point coverage shown in yellow (<b>middle</b>), and determining the point coverage of the test artifact’s expected area with the artifact’s bounding box in blue, the centroid normal as a pink x, point coverage in green, and error in the point coverage in red (<b>right</b>).</p>
Full article ">Figure 8
<p>The raw point cloud of each testing configuration at the test distance of 1 m.</p>
Full article ">Figure 9
<p>The point coverage results of the test represented as the percentage of the artifact’s expected area the point cloud covers within its enclosed area.</p>
Full article ">Figure 10
<p>The error in point coverage for each test represented as the percentage of the total point coverage located outside the expected area region of the test artifact.</p>
Full article ">Figure 11
<p>The RMS value of the distance between the artifact points and their respective reference plane.</p>
Full article ">Figure 12
<p>The average LPD of the artifact point clouds.</p>
Full article ">Figure 13
<p>The average color difference values for each test.</p>
Full article ">Figure 14
<p>The PSNR image values for each test. Note that the PSNR could not be calculated for the undetectable side since the reference color was (0, 0, 0).</p>
Full article ">
35 pages, 25233 KiB  
Article
Assessment of the Solar Potential of Buildings Based on Photogrammetric Data
by Paulina Jaczewska, Hubert Sybilski and Marlena Tywonek
Energies 2025, 18(4), 868; https://doi.org/10.3390/en18040868 - 12 Feb 2025
Viewed by 610
Abstract
In recent years, a growing demand for alternative energy sources, including solar energy, has been observed. This article presents a methodology for assessing the solar potential of buildings using images from Unmanned Aerial Vehicles (UAVs) and point clouds from airborne LIDAR. The proposed [...] Read more.
In recent years, a growing demand for alternative energy sources, including solar energy, has been observed. This article presents a methodology for assessing the solar potential of buildings using images from Unmanned Aerial Vehicles (UAVs) and point clouds from airborne LIDAR. The proposed method includes the following stages: DSM generation, extraction of building footprints, determination of roof parameters, mapping of solar energy generation, removal of areas that are not suitable for the installation of solar systems, calculation of power for each building, conversion of solar irradiance into energy, and mapping of the potential for solar power generation. This paper also describes the Detecting Photovoltaic Panels algorithm, which uses deep learning techniques. The proposed algorithm enabled assessing the efficiency of photovoltaic panels and comparing the results of the maps of the solar potential of buildings, as well as identifying the areas that require optimization. The results of the analysis, conducted in test areas in the village and on the university campus, confirmed the usefulness of the proposed methods. The analysis shows that UAV image data enable the generation of solar potential maps with higher accuracy (MAE = 8.5 MWh) than LIDAR data (MAE = 10.5 MWh). Full article
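The conversion of solar irradiance into energy per building amounts to summing, over the suitable roof pixels, irradiance times pixel area times module efficiency times a performance ratio. The sketch below shows that arithmetic; the function name, the 20% module efficiency, the 0.8 performance ratio, and the illustrative irradiance values are assumptions, not figures taken from the article.

```python
# Hedged sketch of the irradiance-to-energy step for a single roof.
import numpy as np

def roof_energy_kwh(irradiance_kwh_m2, suitable_mask, pixel_area_m2,
                    module_efficiency=0.20, performance_ratio=0.8):
    """Annual PV yield (kWh) for one roof from a per-pixel irradiance raster,
    restricted to pixels flagged as suitable for panel installation."""
    usable = irradiance_kwh_m2[suitable_mask]
    return float(np.sum(usable) * pixel_area_m2 * module_efficiency * performance_ratio)

# Example: a 10 x 10 pixel roof patch at 0.5 m resolution (0.25 m2 per pixel),
# with assumed annual irradiance values of 900-1100 kWh/m2.
irr = np.random.uniform(900, 1100, size=(10, 10))
mask = irr > 1000                      # e.g., keep only well-exposed pixels
print(f"{roof_energy_kwh(irr, mask, 0.25):.0f} kWh per year")
```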
(This article belongs to the Special Issue Advanced Applications of Solar and Thermal Storage Energy)
Show Figures

Figure 1
<p>Research areas: (<b>a</b>) Military University of Technology [Google Earth]; (<b>b</b>) Wodziczna village [own photo].</p>
Full article ">Figure 2
<p>Methodology of generating the map of the solar potential of buildings.</p>
Full article ">Figure 3
<p>The methodology of detecting objects with the use of deep learning.</p>
Full article ">Figure 4
<p>Location of the detected photovoltaic systems. Photovoltaic systems marked with numbers 1–6 correspond to the houses marked in <a href="#energies-18-00868-f005" class="html-fig">Figure 5</a>.</p>
Full article ">Figure 5
<p>Power generation potential in Wodziczna village (low-altitude data). The houses marked with numbers 1–6 correspond to the photovoltaic systems marked in <a href="#energies-18-00868-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 6
<p>Assessment of the solar potential of buildings for the campus of the Military University of Technology: (<b>a</b>) Solar map of the campus; (<b>b</b>) power generation potential of the campus.</p>
Full article ">Figure 6 Cont.
<p>Assessment of the solar potential of buildings for the campus of the Military University of Technology: (<b>a</b>) Solar map of the campus; (<b>b</b>) power generation potential of the campus.</p>
Full article ">Figure 7
<p>The 2.5D visualization of the potential for solar power generation. Fragment of the MUT campus, visualization based on DSM + orthomosaic (mesh size: 50 cm, resolution of the orthomosaic: 5 cm).</p>
Full article ">Figure 8
<p>Maps of power generation potential for three different datasets: (<b>a</b>) Power generation potential in Wodziczna village [DSM (mesh size 1.75 cm) based on imagery data]; (<b>b</b>) Power generation potential in Wodziczna village [DSM (mesh size 10 cm) based on LIDAR data]; (<b>c</b>) Power generation potential in Wodziczna village [DSM (mesh size 100 cm) based on LIDAR data].</p>
Full article ">Figure 9
<p>Comparison of grid size from different datasets: low-altitude imagery data and LIDAR data from ALS, at various mesh sizes.</p>
Full article ">Figure 10
<p>Position of houses on the map of the power generation potential [DSM (mesh size 1.75 cm), based on image data].</p>
Full article ">Figure 11
<p>The 2.5D visualization of the power generation potential: (<b>a</b>) Fragment of Wodziczna village, visualization based on DSM (mesh size 1.75 cm) + orthomosaic (pixel size 1.75 cm), DSM based on low-altitude photogrammetric data; (<b>b</b>) Fragment of Wodziczna village, visualization based on DSM (mesh size 10 cm) + orthomosaic (pixel size 1.75 cm), DSM based on LIDAR data; (<b>c</b>) Fragment of Wodziczna village, visualization based on DSM (mesh size 100 cm) + orthomosaic (pixel size 1.75 cm), DSM based on LIDAR data.</p>
Full article ">Figure 11 Cont.
<p>The 2.5D visualization of the power generation potential: (<b>a</b>) Fragment of Wodziczna village, visualization based on DSM (mesh size 1.75 cm) + orthomosaic (pixel size 1.75 cm), DSM based on low-altitude photogrammetric data; (<b>b</b>) Fragment of Wodziczna village, visualization based on DSM (mesh size 10 cm) + orthomosaic (pixel size 1.75 cm), DSM based on LIDAR data; (<b>c</b>) Fragment of Wodziczna village, visualization based on DSM (mesh size 100 cm) + orthomosaic (pixel size 1.75 cm), DSM based on LIDAR data.</p>
Full article ">Figure 12
<p>Shaded photovoltaic panels (mounting error)—System 3.</p>
Full article ">Figure 13
<p>Comparison of the power generation for roof surfaces with the data on power generation from existing photovoltaic systems.</p>
Full article ">Figure 14
<p>Power generation from photovoltaic systems—vectorized areas of photovoltaic panels. (<b>a</b>) vectorization of a photovoltaic system where the panels are separated; (<b>b</b>) vectorization of a photovoltaic system.</p>
Full article ">Figure 15
<p>Comparison of the power generation for vectorized surfaces of photovoltaic systems with the data on power generation from existing photovoltaic systems.</p>
Full article ">