Search Results (308)

Search Parameters:
Keywords = orthomosaic

17 pages, 7790 KiB  
Article
Application of UAV-SfM Photogrammetry to Monitor Deformations of Coastal Defense Structures
by Santiago García-López, Mercedes Vélez-Nicolás, Verónica Ruiz-Ortiz, Pedro Zarandona-Palacio, Antonio Contreras-de-Villar, Francisco Contreras-de-Villar and Juan José Muñoz-Pérez
Remote Sens. 2025, 17(1), 71; https://doi.org/10.3390/rs17010071 - 28 Dec 2024
Viewed by 507
Abstract
Coastal defense has traditionally relied on hard infrastructures like breakwaters, dykes, and groins to protect harbors, settlements, and beaches from the impacts of longshore drift and storm waves. Prolonged exposure to wave erosion and dynamic loads of various kinds can result in damage, deformation, and eventual failure of these infrastructures, entailing severe economic and environmental losses. Periodic post-construction monitoring is crucial to identify shape changes, ensure the structure's stability, and implement maintenance works as required. This paper evaluates the performance and quality of the restitution products obtained from the application of UAV photogrammetry to the longest breakwater in the province of Cádiz, southern Spain. The photogrammetric outputs, an orthomosaic and a Digital Surface Model (DSM), were validated with in situ RTK-GPS measurements, displaying excellent planimetric accuracy (RMSE of 0.043 m and 0.023 m in X and Y, respectively) and adequate altimetric accuracy (0.100 m in Z). In addition, the average enveloping surface inferred from the DSM allowed the deformation of the breakwater to be quantified and its deformation mechanisms to be defined. UAV photogrammetry has proved to be a suitable and efficient technique to complement traditional monitoring surveys and to provide insights into the deformation mechanisms of coastal structures. Full article
(This article belongs to the Special Issue Coastal and Littoral Observation Using Remote Sensing)
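As a rough illustration of how the planimetric and altimetric accuracy figures above are typically computed, the following NumPy sketch derives a per-axis RMSE from paired checkpoint coordinates; the arrays and values are hypothetical, not the authors' data.

```python
import numpy as np

# Hypothetical checkpoints: RTK-GPS coordinates vs. the same points read from
# the orthomosaic/DSM, as (X, Y, Z) in metres.
gps = np.array([[355012.41, 4051230.87, 3.12],
                [355020.03, 4051241.55, 3.45],
                [355031.66, 4051252.10, 2.98]])
model = np.array([[355012.45, 4051230.84, 3.02],
                  [355019.99, 4051241.58, 3.55],
                  [355031.62, 4051252.13, 2.90]])

residuals = model - gps                          # per-point errors in X, Y, Z
rmse = np.sqrt(np.mean(residuals**2, axis=0))    # one RMSE value per axis
print(dict(zip("XYZ", rmse.round(3))))
```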
Figures

Graphical abstract
Figure 1: Overview of the study area with indication of the Puntilla breakwater (red rectangle), located at the mouth of the Guadalete river.
Figure 2: (a) DJI Matrice 210 V2 quadcopter, (b) DJI Zenmuse X4S camera, (c) target used as ground control point, and (d) RTK-dGPS composed of a Leica GS18 antenna and a Leica CS20 field controller.
Figure 3: Image of one of the GCPs taken as a reference for its good visibility along the breakwater.
Figure 4: Workflow followed to characterize the breakwater geometry and generate the model.
Figure 5: Panoramic view of the breakwater taken on the day of data acquisition, showing both good weather and calm sea conditions.
Figure 6: Flight plan consisting of four path lines (in green) generated with the DJI Pilot software to cover the breakwater.
Figure 7: Detail of the cartographic products at the tip of the breakwater: (a) orthomosaic, (b) DSM.
Figure 8: Frequency distribution of the residuals in Z in the DSM.
Figure 9: (a) DSM of the entire breakwater; (b) detail of the smoothed DSM (elevation range between 3.5 and 6.5 m) at the SW end of the breakwater with indication of the 12 crests identified.
Figure 10: Evolution along the breakwater of the position and height of the crests and troughs identified on the outer façade of the Puntilla breakwater. The origin of distances is located on the first crest.
Figure 11: Picture taken on 26 March 2024 during a westerly storm showing the breaking of 5 wave crests (indicated in the image) affecting the outer façade of the breakwater, in the 300 m-long section where the crest-trough morphologies can be identified.
21 pages, 10310 KiB  
Article
Rapid Mapping: Unmanned Aerial Vehicles and Mobile-Based Remote Sensing for Flash Flood Consequence Monitoring (A Case Study of Tsarevo Municipality, South Bulgarian Black Sea Coast)
by Stelian Dimitrov, Bilyana Borisova, Ivo Ihtimanski, Kalina Radeva, Martin Iliev, Lidiya Semerdzhieva and Stefan Petrov
Urban Sci. 2024, 8(4), 255; https://doi.org/10.3390/urbansci8040255 - 16 Dec 2024
Viewed by 809
Abstract
This research seeks to develop and test a rapid mapping approach using unmanned aerial vehicles (UAVs) and terrestrial laser scanning to provide precise, high-resolution spatial data for urban areas immediately after disasters. This mapping aims to support efforts to protect the population and infrastructure while analyzing the situation in affected areas. It focuses on flood-prone regions that lack modern hydrological data and regular monitoring. The study was conducted in resort villages, dominated by maritime tourism, and their adjacent catchments on Bulgaria's southern Black Sea coast, after a flash flood on 5 September 2023 caused human casualties and severe material damage. The resulting field data, with a spatial resolution of 3 to 5 cm/px, were used to trace the effects of the flood on topographic surface changes and structural disturbances. A flood simulation using UAV data and a digital elevation model was performed. The appropriateness of contemporary land use forms and infrastructure locations in the catchments is discussed, and the role of spatial data in analyzing the causative factors considered in risk assessment is commented on. The results confirm the applicability of rapid mapping in informing the activities of responders in the period of increased vulnerability following a flood. The results were used by Bulgaria's Ministry of Environment and Water to analyze the situation shortly after the disaster. Full article
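Purely as an illustration of how UAV-derived elevation data feed a first-pass flood analysis, the sketch below computes a naive "bathtub" inundation depth (water level minus DEM); this is not the hydrodynamic simulation used in the study, and all values are invented.

```python
import numpy as np

# Toy digital elevation model (metres above sea level) and an assumed flood level
dem = np.array([[2.1, 2.6, 3.4],
                [1.8, 2.2, 2.9]])
water_surface = 2.5                                # assumed water-surface elevation (m)

depth = np.clip(water_surface - dem, 0.0, None)    # cells above the water level stay dry
print(depth)
```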
Figures

Figure 1: The 24-h precipitation (24 h валеж) total for 6 September 2023 (mm) in Bulgaria. Credit: National Institute of Meteorology and Hydrology Bulgaria.
Figure 2: Study area map.
Figure 3: S.O.D.A Photogrammetry sensor (a) and EbeeX fixed-wing UAS (b).
Figure 4: Microdrones MD LiDAR 1000 (a) and Velodyne PUCK VLP-16 (b).
Figure 5: ZEB Horizon Geo SLAM (a) and its accessories (b).
Figure 6: Sediment cones after the Cherna River mouth.
Figure 7: Restoration of the floodplain at the lower end of the Cherna river.
Figure 8: Change in the transverse profile of the bed of the Cherna river due to the flood.
Figure 9: Visualization of the damage to the bridge on Lisevo Dere after the village of Izgrev (model from laser scanning).
Figure 10: Bridge structure in the central part of the town of Tsarevo (model from laser scanning).
Figure 11: The ruined bridge at the bottom of the Cherna River.
Figure 12: Digital surface (1) and digital terrain (2) models.
Figure 13: Areas with logging and lack of forest vegetation (terrain profile).
Figure 14: Cross-section at the collapsed bridge: after the flood in September 2023 and today (June 2024).
36 pages, 28452 KiB  
Article
Assessing Geometric and Radiometric Accuracy of DJI P4 MS Imagery Processed with Agisoft Metashape for Shrubland Mapping
by Tiago van der Worp da Silva, Luísa Gomes Pereira and Bruna R. F. Oliveira
Remote Sens. 2024, 16(24), 4633; https://doi.org/10.3390/rs16244633 - 11 Dec 2024
Viewed by 528
Abstract
The rise in inexpensive Unmanned Aerial Systems (UAS) and accessible processing software offers several advantages in forest ecosystem monitoring and management. The increased usability of such tools can result in the simplification of workflows, potentially impacting the quality of the generated data. This study offers insights into the precision and reliability of the DJI Phantom 4 Multispectral (P4MS) UAS for mapping shrublands, using Agisoft Metashape (AM) for image processing. Geometric accuracy was evaluated using ground control points (GCPs) and different configurations. The best configuration was then used to produce orthomosaics. Subsequently, the orthomosaics were transformed into reflectance orthomosaics using various radiometric correction methods. These methods were further assessed using reference panels. The method producing the most accurate reflectance values was then chosen to create the final reflectance and Normalised Difference Vegetation Index (NDVI) maps. Radiometric accuracy was assessed through a multi-step process. First, precision was measured by comparing reflectance orthomosaics and NDVI derived from images taken on consecutive days. Then, reliability was evaluated by comparing the NDVI with the NDVI from a reference camera, the MicaSense Altum AL0, produced with images acquired on the same days. The results demonstrate that the P4MS is both precise and reliable for shrubland mapping. Reflectance maps and NDVI generated in AM exhibit acceptable geometric and radiometric accuracy when geometric calibration is performed with at least one GCP and radiometric calibration utilises images of reflectance panels captured at flight height, without relying on incident light sensor (ILS) data. Full article
(This article belongs to the Section Forest Remote Sensing)
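For context on the panel-based radiometric calibration compared in this paper, here is a minimal one-point, empirical-line-style correction for a single band; it simplifies what photogrammetric suites do internally, and the panel reflectance and digital numbers are assumed values.

```python
import numpy as np

panel_reflectance = 0.51          # laboratory reflectance of the panel (assumed)
panel_dn = 30500.0                # mean digital number measured over the panel
band_dn = np.array([[12000.0, 15500.0],
                    [28000.0, 31000.0]])   # digital numbers of image pixels

# One-point empirical line through the origin: reflectance = gain * DN
gain = panel_reflectance / panel_dn
reflectance = np.clip(band_dn * gain, 0.0, 1.0)
print(reflectance.round(3))
```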
Figures

Figure 1: Study area location with distribution of the ground control points (GCPs) and verification points (VPs).
Figure 2: (a) Study area vegetation, picture by B.R.F. Oliveira in February 2023, and (b) digital terrain model of the surveyed area.
Figure 3: Flight paths of the DJI P4 MS. The yellow lines represent the sun ray's direction at 13 h and 14 h on 10 November 2022.
Figure 4: Wind speed and direction during flight on the (a) 10 and (b) 11 November.
Figure 5: Net radiation at 2 m height between 10 November 2022 00:00 UTC and 12 November 2022 00:00 UTC.
Figure 6: Ground control/verification point.
Figure 7: Reflectance panels: (a) MicaSense RP04-1918154-OB (size: 15 cm × 15 cm) and (b) high-end Labsphere Spectralon® diffuse reflectance panel (size: 50 cm × 50 cm).
Figure 8: Distribution of radiometric validation control points along the study area.
Figure 9: (a1) DJI Matrice 300 RTK UAS; (b1) DJI P4 Multispectral RTK UAS; (a2) incident light sensor highlighted in red from (a1); and (b2) incident light sensor highlighted in red from (b1).
Figure 10: Main phases of the methodology.
Figure 11: Methodology's Phase 1.
Figure 12: Methodology's Phase 2.
Figure 13: P4MS mean reflectance values of each of the four panels from the orthomosaics produced with different radiometric calibration methods. Letter (a) refers to the flight on 10 November 2022 and letter (b) to the flight on 11 November 2022. The number (1) refers to the radiometric correction with P_1, (2) to the radiometric correction with the ILS, and (3) to the radiometric correction with both P_1 and ILS.
Figure 14: Altum mean reflectance values of each of the four panels from the orthomosaics produced with different radiometric calibration methods. Letter (a) refers to the flight on 10 November 2022 and letter (b) to the flight on 11 November 2022. The number (1) refers to the radiometric correction with P_1, (2) to the radiometric correction with the ILS, and (3) to the radiometric correction with both P_1 and ILS.
Figure 15: The mean of the differences, md_s^(Camera, day), between the mean reflectance values and the corresponding laboratory values according to the radiometric method: (a) with panels, (b) with ILS, and (c) with panels and ILS; the UAS and the flight date.
Figure 16: Mean of the differences, in %, of the digital numbers (DN) from images acquired by the P4MS (blue colour) and the Altum (orange colour) between the 10 and the 11.
Figure 17: Reflectance values of each of the four panels from the orthomosaics produced with radiometric calibration performed with images at flying height. Letter (a) designates the flight from 10 November and letter (b) from 11 November; number (1) P4MS and (2) Altum.
Figure 18: The md_s^(Camera, day) according to the radiometric calibration using panels only and the images for radiometric correction in flight, per band, UAS, and flight date.
Figure 19: The mean reflectance values of the four panels in the orthomosaics produced with calibration images at 1 m height and in flight, per band, UAS ((a) P4MS and (b) Altum), and flight date. The four panels' mean reflectance values measured in the laboratory are also included.
Figure 20: RMSE values for the differences in reflectance, per band, measured at 120 radiometric validation points in orthomosaics produced using the AM and Pix4Dmapper software. The data correspond to images captured with the Altum camera on days 10 and 11. Additionally, RMSE values for the differences in reflectance between orthomosaics produced with Altum images on days 10 and 11 and the Pix4Dmapper are also presented.
Figure 21: Frequency distribution of the maps produced with the radiometrically calibrated images acquired with the P4MS camera: reflectance map of day 10 (a); reflectance map of day 11 (b); bands: blue (1), green (2), red (3), red-edge (4), and NIR (5); NDVI (6).
Figure 22: Relative air humidity from 12:00 until 15:00 UTC on 10 November (light blue dots) and 11 November (dark blue dots).
Figure 23: Sample of the image of the differences in reflectance, per band, between the orthomosaics of the P4MS of days 10 and 11. Band: (a) blue, (b) green, (c) red, (d) red-edge, and (e) NIR.
Figure 24: Image of the differences of the NDVI maps (a) and their frequency distribution (b), produced with the reflectance maps of the P4MS of two consecutive days, 10 and 11.
Figure 25: Radiometric validation point distribution over the binary classification maps for days 10 (a) and 11 (b) from the P4MS. Value "1" represents NDVI above the 0.5 threshold (vigorous vegetation) and "−1" below the 0.5 threshold (others).
Figure 26: Image of the differences of the NDVI maps (a) and their frequency distribution (b), produced with the orthomosaics of the Altum of two consecutive days, 10 and 11.
Figure 27: Radiometric validation point distribution over the binary classification maps for days 10 (a) and 11 (b) from the Altum. Value "1" represents NDVI above the 0.2 threshold (vigorous vegetation) and "−1" below the 0.2 threshold (others).
Figure 28: Overlapping percentage of equal-category NDVI pixels between days 10 and 11 for the P4MS and Altum, and between the P4MS and Altum on days 10 and 11.
Figure 29: (a1,a2) NDVI of the P4MS camera for days 10 and 11 and their frequency distribution with the statistics mean, median, and standard deviation. (b1,b2) NDVI of the Altum camera for days 10 and 11 and their frequency distribution with the statistics mean, median, and standard deviation. (c1,c2) Images of the differences between the NDVI derived from orthomosaics produced with images of the P4MS and of the Altum acquired on days 10 and 11, and the frequency distribution of those differences with the statistics mean, median, and standard deviation.
Figure 30: Frequency distribution of the orthomosaics with the statistics mean, median, and standard deviation for day 10: (a1) red band of P4MS, (b1) red band of Altum, (a2) NIR band of P4MS, and (b2) NIR band of Altum.
Figure 31: Frequency distribution of the orthomosaics with the statistics mean, median, and standard deviation for day 11: (a1) red band of P4MS, (a2) red band of Altum, (b1) NIR band of P4MS, and (b2) NIR band of Altum.
Figure 32: Overlapping percentage of equal-category NDVI pixels between the P4MS and Altum for days 10 and 11.
17 pages, 51586 KiB  
Article
Application of Aerial Photographs and Coastal Field Data to Understand Sea Turtle Landing and Spawning Behavior at Kili-Kili Beach, Indonesia
by Arief Darmawan and Satoshi Takewaka
Geographies 2024, 4(4), 781-797; https://doi.org/10.3390/geographies4040043 - 6 Dec 2024
Viewed by 598
Abstract
We investigated sea turtle landing and spawning behavior along 1.4 km of Kili-Kili Beach in East Java, Indonesia, by combining aerial photographs and field survey data. In the study, we surveyed marks of sea turtles landing and spawning on the beach and utilized aerial photographs, beach profile survey records, grain size measurements of the beach material, and tide records to understand the behavior of the turtles. Firstly, aerial photographs are processed into ortho-mosaics, and beach surfaces are classified into land cover categories. Then, we calculate the number of spawning and non-spawning instances for each category, visualizing landing positions to identify local concentrations. Spawning distances from the waterline are estimated, and beach stability is evaluated by analyzing the temporal elevation change through standard deviation. Our findings reveal preferred spawning locations on bare sand surfaces, around 8 to 45 m from the waterline, with beach elevations ranging from 1 to 5 m. The standard deviations of beach elevation were between 0.0 and 0.7 m, with a mean slope of 0.07. This information is important for effectively conserving sandy beaches that serve as spawning sites for sea turtles. Full article
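The spawning-distance metric described above boils down to a point-to-polyline distance; a small sketch under the assumption of projected (metric) coordinates is shown below, with made-up shoreline vertices and nest position.

```python
import numpy as np

# Hypothetical instantaneous shoreline vertices and one nest position (metres)
shoreline = np.array([[0.0, 0.0], [50.0, 4.0], [100.0, 2.0], [150.0, 6.0]])
nest = np.array([80.0, 30.0])

def point_to_segment(p, a, b):
    """Shortest distance from point p to the segment a-b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

distance = min(point_to_segment(nest, shoreline[i], shoreline[i + 1])
               for i in range(len(shoreline) - 1))
print(f"distance from waterline: {distance:.1f} m")
```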
Figures

Figure 1: Research area: Kili-Kili Beach, Indonesia, and aerial survey sections with their extent.
Figure 2: Example of beach profile data of Kili-Kili Beach at cross-section 1 during observation in 2023.
Figure 3: The positions of objects to verify the ortho-mosaic alignment.
Figure 4: Overall aerial photographs ortho-mosaic of Kili-Kili Beach: (A) 08-04-2023, (B) 20-05-2023, (C) 17-06-2023, (D) 02-07-2023, (E) 19-08-2023, and (F) 29-09-2023.
Figure 5: Sea turtle landing records from March to August 2023. The attached number denotes the landing day: DD-MM-YYYY.
Figure 6: The overall number of spawning and non-spawning sea turtles from 2023 in the research area.
Figure 7: Kili-Kili Beach longshore distribution of sea turtle landings in the area in 2023 (bottom) with spawning or non-spawning positions (upper).
Figure 8: Typical vegetation at Kili-Kili Beach.
Figure 9: Kili-Kili Beach, around measurement base point 1 (BP1). (A) Coastal land cover type delineated from the 08-04-2023 aerial photograph, (B) 20-05-2023 aerial photograph, (C) 17-06-2023 aerial photograph, (D) 02-07-2023 aerial photograph, and (E) 19-08-2023 and (F) 29-09-2023 aerial photographs.
Figure 10: Definition of the distance from the instantaneous shoreline to the spawning (or non-spawning) positions in this study.
Figure 11: Sea turtle spawning and non-spawning positions in the 2023 season, the cross-sections of the field measurements, the sand sampling position, and the area 50 m around each measurement cross-section.
Figure 12: The pattern of distances from the instantaneous shoreline to the sea turtle spawning and non-spawning positions at Kili-Kili Beach.
Figure 13: Beach profile, standard deviation, and spawning and non-spawning positions at Kili-Kili Beach: (A) cross-section 1, (B) cross-section 2, (C) cross-section 3, (D) cross-section 4, (E) cross-section 5, and (F) cross-section 6, and the ratio of missing data in the survey line at the top panel.
Figure 14: (A) Occurrence of sea turtles spawning and not spawning by beach surface elevation; (B) spawning and non-spawning categorized by beach surface elevation and standard deviation; (C) occurrence of spawning and non-spawning by standard deviation.
Figure 15: A series of temporal aerial photographs showing beach surface erosion and its impact on sea turtle landings. (A) 08-04-2023 aerial photograph, (B) enlargement of a portion of the 08-04-2023 aerial photograph showing a part of the beach before erosion, (C) 20-05-2023 aerial photograph showing the erosion of the same part of the beach, (D) 17-06-2023 aerial photograph showing the erosion of the same part of the beach at a later date, (E) enlargement of a portion of the 20-05-2023 aerial photograph, and (F) enlargement of a portion of the 17-06-2023 aerial photograph.
13 pages, 7696 KiB  
Article
From Stationary to Nonstationary UAVs: Deep-Learning-Based Method for Vehicle Speed Estimation
by Muhammad Waqas Ahmed, Muhammad Adnan, Muhammad Ahmed, Davy Janssens, Geert Wets, Afzal Ahmed and Wim Ectors
Algorithms 2024, 17(12), 558; https://doi.org/10.3390/a17120558 - 6 Dec 2024
Viewed by 597
Abstract
The development of smart cities relies on the implementation of cutting-edge technologies. Unmanned aerial vehicles (UAVs) and deep learning (DL) models are examples of such disruptive technologies with diverse industrial applications that are gaining traction. When it comes to road traffic monitoring systems (RTMs), the combination of UAVs and vision-based methods has shown great potential. Currently, most solutions focus on analyzing traffic footage captured by hovering UAVs due to the inherent georeferencing challenges in video footage from nonstationary drones. We propose an innovative method capable of estimating traffic speed using footage from both stationary and nonstationary UAVs. The process involves matching each pixel of the input frame with a georeferenced orthomosaic using a feature-matching algorithm. Subsequently, a tracking-enabled YOLOv8 object detection model is applied to the frame to detect vehicles and their trajectories. The geographic positions of these moving vehicles over time are logged in JSON format. The accuracy of this method was validated with reference measurements recorded from a laser speed gun. The results indicate that the proposed method can estimate vehicle speeds with an absolute error as low as 0.53 km/h. The study also discusses the associated problems and constraints with nonstationary drone footage as input and proposes strategies for minimizing noise and inaccuracies. Despite these challenges, the proposed framework demonstrates considerable potential and signifies another step towards automated road traffic monitoring systems. This system enables transportation modelers to realistically capture traffic behavior over a wider area, unlike existing roadside camera systems prone to blind spots and limited spatial coverage. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
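The exponential-moving-average (EMA) low-pass filter mentioned in the abstract is simple to reproduce; the sketch below uses a synthetic speed series rather than the study's measurements.

```python
import numpy as np

def ema(values, alpha=0.1):
    """Exponential moving average used as a simple low-pass filter."""
    smoothed = np.empty(len(values))
    smoothed[0] = values[0]
    for i in range(1, len(values)):
        smoothed[i] = alpha * values[i] + (1 - alpha) * smoothed[i - 1]
    return smoothed

noisy_speeds = np.array([24.0, 31.0, 22.0, 35.0, 25.0, 27.0, 26.0])  # km/h, synthetic
print(ema(noisy_speeds).round(1))
```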
Figures

Figure 1: Methodological framework of the study.
Figure 2: Feature-matching algorithm SIFT applied to the input and template image. The highlighted markers depict the key points matched between the two images.
Figure 3: Comparison of noisy and EMA-filtered trajectories with different alpha values.
Figure 4: The mapped vehicle trajectories before and after EMA application.
Figure 5: The fluctuations in velocity (in km/h) over time (in seconds) and the removal of errors using an EMA-based low-pass filter (α = 0.1). The single-point reference speed measured by the speed gun was 26 km/h.
Figure 6: The pseudo tracks generated by the object tracking algorithm due to UAV movement.
Figure 7: Extreme velocity (km/h) over time (s) with fluctuation resulting from pseudo tracks and their removal using the distance-based movement threshold (after introducing the distance threshold, the first measurement starts at 4.3 s). The single-point reference speed measured by the speed gun was 26 km/h.
Figure 8: The method used for determining the positional accuracies of vehicle tracks on (a) tracks from stationary drone footage and (b) tracks from moving drone footage.
16 pages, 21810 KiB  
Article
Enhancing Direct Georeferencing Using Real-Time Kinematic UAVs and Structure from Motion-Based Photogrammetry for Large-Scale Infrastructure
by Soohee Han and Dongyeob Han
Drones 2024, 8(12), 736; https://doi.org/10.3390/drones8120736 - 5 Dec 2024
Viewed by 808
Abstract
The growing demand for high-accuracy mapping and 3D modeling using unmanned aerial vehicles (UAVs) has accelerated advancements in flight dynamics, positioning accuracy, and imaging technology. Structure from motion (SfM), a computer vision-based approach, is increasingly replacing traditional photogrammetry by facilitating the automation of processes such as aerial triangulation (AT), terrain modeling, and orthomosaic generation. This study examines methods to enhance the accuracy of SfM-based AT using real-time kinematic (RTK) UAV imagery, focusing on large-scale infrastructure applications, including a dam and its entire basin. The target area, consisting primarily of homogeneous water surfaces, poses considerable challenges for feature point extraction and image matching, which are crucial for effective SfM. To overcome these challenges and improve the AT accuracy, a constraint equation was applied, incorporating weighted 3D coordinates derived from RTK UAV data. Furthermore, oblique images were combined with nadir images to stabilize AT, and confidence-based filtering was applied to point clouds to enhance geometric quality. The results indicate that assigning appropriate weights to 3D coordinates and incorporating oblique imagery significantly improve the AT accuracy. This approach presents promising advancements for RTK UAV-based AT in SfM-challenging, large-scale environments, thus supporting more efficient and precise mapping applications. Full article
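The idea of weighting RTK-derived coordinates in the adjustment can be illustrated with a toy one-dimensional weighted estimate, where each observation is weighted by the inverse of its assumed variance; this is only a conceptual sketch, not the constraint equation used in the paper.

```python
import numpy as np

# Two "observations" of the same camera coordinate: an RTK-GNSS value and a
# value from the image block, each with an assumed a priori standard deviation.
observations = np.array([152.034, 152.120])   # metres (illustrative)
sigmas = np.array([0.03, 0.10])               # assumed accuracies (m)

weights = 1.0 / sigmas**2                     # looser accuracy -> smaller weight
estimate = np.sum(weights * observations) / np.sum(weights)
print(round(float(estimate), 3))
```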
Figures

Figure 1: Four scenarios for nadir–oblique combined photography: (a) a single grid for each direction, (b) sequential nadir/2–oblique shots in a double grid, (c) sequential nadir/4–oblique shots in a single grid, and (d) omnidirectional shots in a single grid.
Figure 2: Common procedures of SfM.
Figure 3: Overview of Site 1 (basemap generated using the V-World API).
Figure 4: Overview of Site 2 (basemap generated using the V-World API).
Figure 5: Failed images at Site 1: (a) image locations; (b) a sample image.
Figure 6: Locations of checkpoints: (a) Site 1; (b) Site 2.
Figure 7: Point clouds with error points from a horizontal view at Site 1: (a) before filtering; (b) after filtering.
Figure 8: Point clouds with error points from a perspective view at Site 1: (a) before filtering; (b) after filtering.
26 pages, 23951 KiB  
Article
Development of Methods for Satellite Shoreline Detection and Monitoring of Megacusp Undulations
by Riccardo Angelini, Eduard Angelats, Guido Luzi, Andrea Masiero, Gonzalo Simarro and Francesca Ribas
Remote Sens. 2024, 16(23), 4553; https://doi.org/10.3390/rs16234553 - 4 Dec 2024
Viewed by 581
Abstract
Coastal zones, particularly sandy beaches, are highly dynamic environments subject to a variety of natural and anthropogenic forcings. The instantaneous shoreline is a widely used indicator of beach changes in image-based applications, and it can display undulations at different spatial and temporal scales. Megacusps, periodic seaward and landward shoreline perturbations, are an example of such undulations that can significantly modify beach width and impact its usability. Traditionally, the study of these phenomena has relied on video monitoring systems, which provide high-frequency imagery but limited spatial coverage. Instead, this study explored the potential of multispectral satellite-derived shorelines, specifically from the Sentinel-2 (S2) and PlanetScope (PLN) platforms, for characterizing and monitoring megacusp formation and dynamics over time. First, a tool was developed and validated to guarantee accurate shoreline detection, based on a combination of spectral indices along with both thresholding and unsupervised clustering techniques. Validation of this shoreline detection phase was performed on three micro-tidal Mediterranean beaches, comparing against high-resolution orthomosaics and in-situ GNSS data and obtaining good subpixel accuracy (with a mean absolute deviation of 1.5–5.5 m depending on the satellite type). Second, a tool for megacusp characterization was implemented, and subsequent validation with reference data proved that satellite-derived shorelines can be used to robustly and accurately describe megacusps. The methodology could not only capture their amplitude and wavelength (of the order of 10 and 100 m, respectively) but also monitor their weekly-to-daily evolution using different potential metrics, thanks to the combination of S2 and PLN imagery. Our findings demonstrate that multispectral satellite imagery provides a viable and scalable solution for monitoring shoreline megacusp undulations, enhancing our understanding and offering an interesting option for coastal management. Full article
(This article belongs to the Section Environmental Remote Sensing)
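As a minimal illustration of the index-plus-threshold step behind satellite shoreline detection, the sketch below computes NDWI on a tiny synthetic scene and applies a fixed threshold; the paper additionally evaluates other indices, thresholding schemes, and unsupervised clustering.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR)."""
    g, n = green.astype(float), nir.astype(float)
    return (g - n) / np.maximum(g + n, 1e-9)

# Tiny synthetic scene: the rightmost column behaves like water
green = np.array([[900, 950, 1800], [880, 940, 1750]])
nir = np.array([[2400, 2300, 600], [2450, 2350, 650]])

water_mask = ndwi(green, nir) > 0.0    # simple fixed threshold for illustration
print(water_mask.astype(int))
```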
Figures

Figure 1: Study areas: (a) Southern Llobregat Delta (SLD) coast (Spain), (b) Northern Ombrone Delta (NOD) coast (Italy) and Feniglia beach (FNG) (Italy). The position of wave buoys and tide gauges is also shown. The coordinate reference system is WGS84.
Figure 2: Example of the shoreline extraction method with S2 data: (a) raster file of the spectral index (NDWI), (b) binarization of the image with K-means, (c) contour extraction, (d) comparison between the reference shoreline and the detected one, and (e) validation by using the baseline and transect method.
Figure 3: Workflow of the shoreline extraction tool and of the megacusp characterization tool.
Figure 4: Examples of the shoreline detection phase. Results of the best index-method combination for S2 and PLN compared with the reference, for (a) the SLD coast (23 May 2019), (b) FNG beach (20 July 2021), and (c) the NOD coast (20 July 2021). In the background, orthomosaics closest to the satellite overpass dates are displayed for each beach.
Figure 5: First segment of May 2017 used for the validation phase. (a) On top, the four lines correspond to the shorelines of the reference case (green), the best method–index combination in S2 data (GMM–NDWI, orange), the best method–index combination in PLN data (K-means–NIR, grey), and the CoastSat tool (blue). (b) At the bottom, with the same colors, the detrended lines show the automatic peaks (red square) and valleys (green dot) for each detected megacusp. The numbers refer to the megacusp embayments that are visible in the orthomosaic. The x-axis is set to zero at the starting point of the segment.
Figure 6: Second segment of May 2019 used for the validation phase. (a) On top, the four lines present the shorelines of the reference case (green), the best method–index combination in S2 data (GMM–NDWI, orange), the best method–index combination in PLN data (K-means–NIR, grey), and the CoastSat tool (blue). (b) On the bottom, with the same colors, the detrended lines show the automatically detected peaks (red square) and valleys (green dot) for each megacusp. The numbers enumerate the megacusp embayments that are visible in the orthomosaic. The x-axis is set to zero at the starting point of the segment.
Figure 7: Time series of the megacusp event in the SLD coast in 2023. On the left, Sentinel-2 images in the period between March and October 2023. On the right, the time series enriched by adding PlanetScope images during the peak of the event (May–June 2023).
Figure 8: Results of the 2023 megacusp event in the SLD coast with the corresponding wave conditions. Time series of (a) significant wave height (H_s), (b) peak wave period (T_p), (c) direction of wave incidence with respect to the north (θ), (d) sinuosity (s), (e) shoreline standard deviation (σ_s), (f) mean megacusp amplitude (ā), and (g) mean megacusp wavelength (λ̄) are shown.
Figure 9: Time series of the megacusp event in FNG beach in 2022. On the left, Sentinel-2 images in the period between February and June 2022. On the right, the time series enriched by adding PlanetScope images at the peak of the event (March 2022).
Figure 10: Results of the 2022 megacusp event in FNG beach with the corresponding wave conditions. Time series of (a) significant wave height (H_s), (b) peak wave period (T_p), (c) direction of wave incidence to north (θ), (d) sinuosity (s), (e) shoreline standard deviation (σ_s), (f) mean megacusp amplitude (ā), and (g) mean megacusp wavelength (λ̄) are shown.
19 pages, 6073 KiB  
Article
Effective UAV Photogrammetry for Forest Management: New Insights on Side Overlap and Flight Parameters
by Atman Dhruva, Robin J. L. Hartley, Todd A. N. Redpath, Honey Jane C. Estarija, David Cajes and Peter D. Massam
Forests 2024, 15(12), 2135; https://doi.org/10.3390/f15122135 - 2 Dec 2024
Viewed by 1092
Abstract
Silvicultural operations such as planting, pruning, and thinning are vital for the forest value chain, requiring efficient monitoring to prevent value loss. While effective, traditional field plots are time-consuming, costly, spatially limited, and rely on assumptions that they adequately represent a wider area. Alternatively, unmanned aerial vehicles (UAVs) can cover large areas while keeping operators safe from hazards including steep terrain. Despite their utility, optimal flight parameters to ensure flight efficiency and data quality remain under-researched. This study evaluated the impact of forward and side overlap and flight altitude on the quality of two- and three-dimensional spatial data products from UAV photogrammetry (UAV-SfM) for assessing stand density in a recently thinned Pinus radiata D. Don plantation. A contemporaneously acquired UAV laser scanner (ULS) point cloud provided reference data. The results indicate that the optimal UAV-SfM flight parameters are 90% forward and 85% side overlap at a 120 m altitude. Flights at an 80 m altitude offered marginal resolution improvement (2.2 cm compared to 3.2 cm ground sample distance/GSD) but took longer and were more error-prone. Individual tree detection (ITD) for stand density assessment was then applied to both UAV-SfM and ULS canopy height models (CHMs). Manual cleaning of the detected ULS tree peaks provided ground truth for both methods. UAV-SfM had a lower recall (0.85 vs. 0.94) but a higher precision (0.97 vs. 0.95) compared to ULS. Overall, the F-score indicated no significant difference between a prosumer-grade photogrammetric UAV and an industrial-grade ULS for stand density assessments, demonstrating the efficacy of affordable, off-the-shelf UAV technology for forest managers. Furthermore, in addressing the knowledge gap regarding optimal UAV flight parameters for conducting operational forestry assessments, this study provides valuable insights into the importance of side overlap for orthomosaic quality in forest environments. Full article
(This article belongs to the Special Issue Image Processing for Forest Characterization)
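The detection scores quoted above follow the standard precision/recall/F-score definitions; a small sketch with illustrative counts (not the study's tallies) is given below.

```python
# Precision, recall, and F-score for individual tree detection (ITD),
# computed from illustrative match counts.
true_positives = 425      # detected crowns matched to reference trees
false_positives = 13      # detections with no matching reference tree
false_negatives = 75      # reference trees that were missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
f_score = 2 * precision * recall / (precision + recall)
print(round(precision, 2), round(recall, 2), round(f_score, 2))
```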
Figures

Graphical abstract
Figure 1: Location of the study site within NZ (inset), with the optimal orthomosaic overlaid on a mixed topographic/aerial image of the region. The stand boundary is indicated by the purple line, GCP locations are in orange, and the take-off location is in cyan.
Figure 2: Image of a GCP coated with reflective material and painted in a high contrast pattern for identification in the ULS and UAV-SfM datasets.
Figure 3: Images of the UAVs utilised in this study: (a) the DJI Phantom 4 Pro; (b) the DJI Matrice 300 RTK with a DJI L1 sensor; (c) the flight crew operating UAVs from an MEWP to maintain VLOS.
Figure 4: Examples of image artefacts encountered when annotating the orthomosaics: (a,d,e) "blurring" and "smudging" effects; (b,d) "tearing" or "breaking" discontinuities within the image; (c,e) "ghosting", in which the displacement of ground and canopy pixels results in transparent tree canopies; (d,f) in some areas tree canopies were fragmented into smaller chunks.
Figure 5: Bar graph plotting the percentage area of each orthomosaic free from artefacts for each flight plan, coloured by altitude. The missions are arranged by overlap, with the lowest on the left and highest on the right.
Figure 6: Comparison of flight time duration between different overlap flight missions, coloured by altitude.
Figure 7: Correlations between (a) forward overlap and (b) side overlap with the area of each orthomosaic that was clear of artefacts. The teal line represents the linear model between variables, and point locations are jittered so that multiple points with the same value are visible.
Figure 8: Calculated relief displacement for an object (e.g., a tree) of height (h) 20 m, within imagery captured at a flying height (H) of 80 m (purple vectors) or 120 m (yellow vectors) above ground level. The displacement vectors in the legend are scaled to an image distance of 100 mm. Relief displacement vectors are plotted on an image captured at ~80 m AGL and at image radial distances of 0, 100, 200, 300, 400, 500, 650, and 900 mm from the principal point, which for this illustration is assumed to coincide with the image centre.
Figure 9: Demonstration of the greater impact of side (dashed purple line) than forward (dotted green line) overlap on movement of the near-nadir viewing region of an image (solid teal line). Movement values are based on the P4 Pro camera at a height of 120 m, with an image footprint of 180 × 120 m.
Figure 10: The effects of shadow and occlusions in the orthomosaics produced by flights flown at an overlap of 90:85 at 80 m (a) and 90:90 at 80 m (b).
26 pages, 10461 KiB  
Article
Accuracy and Precision of Shallow-Water Photogrammetry from the Sea Surface
by Elisa Casella, Giovanni Scicchitano and Alessio Rovere
Remote Sens. 2024, 16(22), 4321; https://doi.org/10.3390/rs16224321 - 19 Nov 2024
Viewed by 1150
Abstract
Mapping shallow-water bathymetry and morphology represents a technical challenge. In fact, acoustic surveys are limited by the water depths reachable by boat, and airborne surveys have high costs. Photogrammetric approaches (either via drone or from the sea surface) have opened up the possibility of performing shallow-water surveys easily and at accessible costs. This work presents a simple, low-cost, and highly portable platform that allows sequential photos and echosounder depth values of shallow-water sites (up to 5 m depth) to be gathered. The photos are then processed with photogrammetric techniques to obtain digital bathymetric models and orthomosaics of the seafloor. The workflow was tested on four repeated surveys of the same area in the Western Mediterranean and yielded digital bathymetric models with centimetric average accuracy and precision and root mean square errors within a few decimetres. The platform presented in this work can be employed to obtain first-order bathymetric products, enabling the contextual establishment of the depth accuracy of the final products. Full article
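The accuracy assessment described above reduces to differencing DBM depths against echosounder control points; the sketch below also bins the RMSE by depth, in the spirit of the per-depth analysis, using invented values.

```python
import numpy as np

# Hypothetical control data: echosounder depths vs. depths read from the DBM (m)
echo = np.array([0.8, 1.4, 2.1, 2.9, 3.6, 4.4])
dbm = np.array([0.9, 1.3, 2.3, 2.7, 3.9, 4.1])
diff = dbm - echo

print("mean difference:", round(float(diff.mean()), 3))
bins = np.digitize(echo, [2.0, 4.0])     # 0: <2 m, 1: 2-4 m, 2: >4 m
for b, label in enumerate(["0-2 m", "2-4 m", ">4 m"]):
    selected = diff[bins == b]
    if selected.size:
        print(label, "RMSE:", round(float(np.sqrt(np.mean(selected**2))), 3))
```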
Figures

Figure 1: (A) Map of the Italian Peninsula. The star indicates the study site. Site where the test area (dashed line) is located, as seen in: (B) orthomosaic of the area (background image from Google Earth, 2022) and (C) oblique drone photo.
Figure 2: Field setup used in this study. A snorkelling operator drags a diver's buoy, on top of which a dry case with a GNSS receiver (1) and a mobile phone (2) are fixed. A GoPro camera (3) and a portable echosounder (4) are attached to the underwater part of the diver's buoy. See text for details. The drawing is not to scale.
Figure 3: Example of results obtained using the workflow outlined in the main text. (A) Grid pattern followed by the snorkelling operator. (B) Orthomosaic (with hillshade in the background). (C) Digital bathymetric model (DBM) and echosounder points. Panels A, B, and C refer to the survey performed on 13 August 2020. The same results for all surveys are shown in Figure A2. (D–G) show an example of a picture for each survey date. The location pin (also shown in panel B) helps orient the image and place it in the reconstructed scene.
Figure 4: Percentage of points and corresponding confidence calculated by Agisoft Metashape. Note that the surveys of 28 July and 13 August have higher confidence than the other two surveys, for which fewer photos were aligned by the program.
Figure 5: Histograms showing the depth differences between DBM depths and control echosounder points (which represent the accuracy of each DBM), with the average difference and RMSE for each survey date (panels A–D). For a plot of echosounder depths versus DBM depths, see Figure A4.
Figure A1: (A) Screenshot of the echosounder during data collection. The upper part shows the map location, while the lower part shows the sonogram surveyed by the echosounder. (B) Picture of the GNSS screen. These data are needed to synchronise the pictures taken with the GoPro camera with GNSS time.
Figure A2: Same as in Figure 3, but for all survey dates. The orthomosaics and DBMs shown here are not aligned to the 13 August one.
Figure A3: Heatmap showing the RMSE between echosounder control points and DBM depths, divided by survey date and depth bin. Darker blue colors represent higher RMSE.
Figure A4: Scatterplots of DBM depths (x-axis) versus echosounder point depths (y-axis) for each survey date (panels A–D).
Figure A5: Maps of the differences between DBMs from surveys performed on different dates.
Figure A6: Histograms showing the differences between DBMs from surveys performed on different dates.
15 pages, 6065 KiB  
Article
Assessment of UAV-Based Deep Learning for Corn Crop Analysis in Midwest Brazil
by José Augusto Correa Martins, Alberto Yoshiriki Hisano Higuti, Aiesca Oliveira Pellegrin, Raquel Soares Juliano, Adriana Mello de Araújo, Luiz Alberto Pellegrin, Veraldo Liesenberg, Ana Paula Marques Ramos, Wesley Nunes Gonçalves, Diego André Sant’Ana, Hemerson Pistori and José Marcato Junior
Agriculture 2024, 14(11), 2029; https://doi.org/10.3390/agriculture14112029 - 11 Nov 2024
Viewed by 672
Abstract
Crop segmentation, the process of identifying and delineating agricultural fields or specific crops within an image, plays a crucial role in precision agriculture, enabling farmers and public managers to make informed decisions regarding crop health, yield estimation, and resource allocation in Midwest Brazil. The corn crops in this region are being damaged by wild pigs and by diseases. For the quantification of corn fields, this paper applies novel computer-vision techniques and a new dataset of corn imagery composed of 1416 images of 256 × 256 pixels and corresponding labels. We flew nine drone missions and classified wild pig damage in ten orthomosaics at different stages of growth using semi-automatic digitizing and deep-learning techniques. The period of crop-development analysis ranges from early sprouting to the start of the drying phase. The objective of segmentation is to transform or simplify the representation of an image, making it more meaningful and easier to interpret. The corn class achieved an IoU of 77.92% and the background 83.25% using the DeepLabV3+ architecture; with the SegFormer architecture, the corresponding values were 78.81% for corn and 83.73% for background. Accuracy reached 86.88% for the corn class and 91.41% for the background using DeepLabV3+, and 88.14% for corn and 91.15% for background using SegFormer. Full article
(This article belongs to the Special Issue Applications of Remote Sensing in Agricultural Soil and Crop Mapping)
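The IoU figures reported above follow the usual intersection-over-union definition for a binary class mask; a toy example is sketched below (not tied to the paper's data or models).

```python
import numpy as np

# Toy 4 x 4 binary masks for the "corn" class: model prediction vs. ground truth
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 1, 1]], dtype=bool)
truth = np.array([[1, 1, 1, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 1]], dtype=bool)

intersection = np.logical_and(pred, truth).sum()
union = np.logical_or(pred, truth).sum()
print("IoU:", round(float(intersection / union), 3))
```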
Figures

Figure 1: Flowchart presenting the phases of the workflow.
Figure 2: Location map of the study area and mosaics used for the research.
Figure 3: The training losses of the pattern recognition CNN plotted as a function of iteration for 40,000 epochs.
15 pages, 3402 KiB  
Article
Multispectral UAV-Based Disease Identification Using Vegetation Indices for Maize Hybrids
by László Radócz, Csaba Juhász, András Tamás, Árpád Illés, Péter Ragán and László Radócz
Agriculture 2024, 14(11), 2002; https://doi.org/10.3390/agriculture14112002 - 7 Nov 2024
Viewed by 798
Abstract
In the future, the cultivation of maize will become more and more prominent. As the world's demand for food and animal feed increases, remote sensing (RS) technologies, especially unmanned aerial vehicles (UAVs), are developing rapidly, and the usability of the multispectral (MS) cameras installed on them is increasing, especially for plant disease detection and severity observations. In the present research, two different maize hybrids were employed: P9025 and the sweet corn Dessert R78 (CS hybrid). Four different treatments were performed with three different doses (low, medium, and high) of infection with the corn smut fungus (Ustilago maydis [DC] Corda). The fields were monitored twice after the inoculation, at 20 DAI (days after inoculation) and 27 DAI. The orthomosaics were created in WebODM 2.5.2 software and the study included five vegetation indices (NDVI [Normalized Difference Vegetation Index], GNDVI [Green Normalized Difference Vegetation Index], NDRE [Normalized Difference Red Edge], LCI [Leaf Chlorophyll Index] and ENDVI [Enhanced Normalized Difference Vegetation Index]), with further analysis in QGIS. The gathered data were analyzed using the R-based Jamovi 2.6.13 software with different statistical methods. In the case of the sweet maize hybrid, we obtained promising results: the NDVI values of CS 0 were significantly higher than those of the high-dose infection CS 10.000, with a mean difference of 0.05422 *** and a p value of 4.43 × 10⁻⁵, suggesting differences at all levels of infection. Furthermore, we investigated the correlations of the vegetation indices (VIs) for the Dessert R78, where NDVI and GNDVI showed high correlations. NDVI had a strong correlation with GNDVI (r = 0.83), a medium correlation with LCI (r = 0.56), and a weak correlation with NDRE (r = 0.419). There was also a strong correlation between LCI and GNDVI, with r = 0.836. The NDRE and GNDVI indices had a correlation coefficient of r = 0.716. For hybrid separation analyses, useful results were obtained for NDVI and ENDVI as well. Full article
(This article belongs to the Section Crop Production)
Show Figures

Figure 1
<p>Map of Hungary and Hajdú-Bihar county, with the subregion and city of Debrecen where the experiment was set up.</p>
Full article ">Figure 2
<p>Research field of the University of Debrecen. Each red rectangle represents a parcel, including its number. (Orthomosaic from a DJI Phantom 4 Multispectral, made by Shenzhen DJI Sciences and Technologies Ltd., Shenzhen, China; the recording contains all six channels; displayed in the QGIS 3.360 environment.)</p>
Full article ">Figure 3
<p>Research field NDVI map (QGIS environment). The filtered version contains only maize plant pixels on the colored NDVI channel.</p>
Full article ">Figure 4
<p>Symptoms of the <span class="html-italic">Ustilago maydis</span> (DC) Corda three weeks after the infection (left picture, stalk symptom; right picture, leaf symptoms); chlorosis and cell destruction are mainly seen on the plant tissues.</p>
Full article ">Figure 5
<p>Data distribution in histogram with density for the five VIs. NDVI, GNDVI, ENDVI, NDRE and LCI in sweet maize Dessert R78 hybrid. H + IL stands for hybrid and infection level.</p>
Full article ">Figure 6
<p>Correlation matrix of the five different VIs. Note: * <span class="html-italic">p</span> &lt; 0.05, ** <span class="html-italic">p</span> &lt; 0.01, *** <span class="html-italic">p</span> &lt; 0.001, one-tailed.</p>
Full article ">Figure 7
<p>Data distribution in histogram with density for the five VIs. NDVI, GNDVI, ENDVI, NDRE, and LCI in forage maize P9025 hybrid. H + IL stands for hybrid and infection level.</p>
Full article ">Figure 8
<p>Box plot and data distribution VI. NDVI for P9025 (T) and Dessert R78 (CS).</p>
Full article ">Figure 9
<p>Box plot and data distribution VI. ENDVI for P9025 (T) and Dessert R78 (CS).</p>
Full article ">
22 pages, 4356 KiB  
Article
Using Unmanned Aerial Systems Technology to Characterize the Dynamics of Small-Scale Maize Production Systems for Precision Agriculture
by Andrew Manu, Joshua McDanel, Daniel Brummel, Vincent Kodjo Avornyo and Thomas Lawler
Drones 2024, 8(11), 633; https://doi.org/10.3390/drones8110633 - 1 Nov 2024
Viewed by 1022
Abstract
Precision agriculture (PA) utilizes spatial and temporal variability to improve the sustainability and efficiency of farming practices. This study used high-resolution imagery from UAS to evaluate maize yield variability across three fields in Ghana: Sombolouna, Tilli, and Yendi, exploiting the potential of UAS [...] Read more.
Precision agriculture (PA) utilizes spatial and temporal variability to improve the sustainability and efficiency of farming practices. This study used high-resolution imagery from UAS to evaluate maize yield variability across three fields in Ghana: Sombolouna, Tilli, and Yendi, exploiting the potential of UAS technology in PA. Initially, excess green index (EGI) classification was used to differentiate between bare soil, dead vegetation, and thriving vegetation, including maize and weeds. Thriving vegetation was further classified into maize and weeds, and their corresponding rasters were developed. The normalized difference red edge (NDRE) index was applied to assess maize health. The Jenks natural breaks algorithm classified maize rasters into low, medium, and high differential yield zones (DYZs). The percentages of bare space, maize coverage, and weed coverage, as well as total maize production, were determined. Field conditions varied significantly: Yendi had 34% of its field bare, Tilli had the highest weed coverage at 22%, and Sombolouna had the highest maize crop coverage at 73.9%. Maize yields ranged from 860 kg ha−1 in the low DYZ to 4900 kg ha−1 in the high DYZ. Although yields in Sombolouna and Tilli were similar, both fields significantly outperformed Yendi. Scenario analysis suggested that enhancing management practices to elevate low DYZs to medium levels could increase production by 2.1%, while further improvements to raise low and medium DYZs to high levels could boost productivity by up to 20%. Full article
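As a rough illustration of the zoning workflow described above (EGI-based vegetation masking followed by splitting NDRE into low, medium, and high zones), the following Python sketch may help. It is not the authors' ArcGIS Pro implementation: the EGI formulation, the vegetation threshold, and the hard-coded class breaks (which in the study come from the Jenks natural breaks algorithm) are illustrative assumptions.

```python
import numpy as np

def excess_green_index(red, green, blue):
    """Excess green index on chromatic coordinates (a common formulation;
    the exact variant used in the paper is not specified here)."""
    total = red + green + blue + 1e-9
    r, g, b = red / total, green / total, blue / total
    return 2 * g - r - b

def classify_zones(ndre, vegetation_mask, breaks):
    """Split NDRE values of maize pixels into low/medium/high zones.

    `breaks` holds the two class boundaries; in the study they come from
    the Jenks natural breaks algorithm, here they are supplied by the
    caller. Returns 0 = masked out, 1 = low, 2 = medium, 3 = high.
    """
    zones = np.digitize(ndre, bins=breaks) + 1   # 1, 2 or 3
    zones[~vegetation_mask] = 0                  # drop bare soil / weeds
    return zones

# Illustrative usage with random rasters and hand-picked breaks.
rng = np.random.default_rng(1)
red, green, blue, nir, red_edge = (rng.random((100, 100)) for _ in range(5))
egi = excess_green_index(red, green, blue)
ndre = (nir - red_edge) / (nir + red_edge + 1e-9)
zones = classify_zones(ndre, vegetation_mask=egi > 0.05, breaks=[0.15, 0.30])
```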
(This article belongs to the Special Issue UAS in Smart Agriculture: 2nd Edition)
Show Figures

Figure 1
<p>Location of maize production sites (red pins) that served as project sites: Sombolouna and Tilli in the Upper East Region of Ghana and Yendi in the Northern Region.</p>
Full article ">Figure 2
<p>Illustration of how DYZs were generalized. Blue ovals are input layers not produced in the generalization process, yellow rectangles are the tools used in ArcGIS Pro, and green ovals are layers produced by the tools used. Arrows display the workflow used to create the final generalized zones.</p>
Full article ">Figure 3
<p>High-resolution RGB orthomosaics of maize production fields at (<b>a</b>) Sombolouna, (<b>b</b>) Tilli, and (<b>c</b>) Yendi, captured using unmanned aerial systems technology at the R2 stage of the maize crop (Coordinate System: WGS 1984 UTM Zone N; Projection: Transverse Mercator; Datum: WGS 1984).</p>
Full article ">Figure 4
<p>Detailed (high resolution) NDRE differential yield zone maps: (<b>a</b>) Sombolouna, (<b>b</b>) Tilli, and (<b>c</b>) Yendi, showing low-, medium-, and high-yielding areas at the R2 stage of the maize crop.</p>
Full article ">Figure 5
<p>Generalized maps of fields at (<b>a</b>) Sombolouna, (<b>b</b>) Tilli, and (<b>c</b>) Yendi, showing low, medium, and high differential yield zones at the R2 stage of the maize crop.</p>
Full article ">Figure 6
<p>Percent land coverage of high, medium, and low zones at Sombolouna, Tilli, and Yendi. According to Tukey’s studentized range test, bars of the same texture and color with the same letter are not significantly different at the 0.05 level.</p>
Full article ">Figure 7
<p>Percentage of bare areas in (<b>a</b>) production fields and (<b>b</b>) differential yield zones within fields. Using Tukey’s studentized range test, bars with the same letters are not significantly different at the 0.05 level.</p>
Full article ">Figure 8
<p>Maize and weed coverages at Sombolouna, Tilli, and Yendi. Using Tukey’s studentized range test, bars with the same letters are not significantly different at the 0.05 level.</p>
Full article ">Figure 9
<p>Vegetation distribution and its partition into maize and weeds as a function of zones. Using Tukey’s studentized range test, bars with the same letters are not significantly different at the 0.05 level.</p>
Full article ">Figure 10
<p>Plant density (plants/ha) in (<b>a</b>) production fields and (<b>b</b>) differential yield zones within fields. Using Tukey’s studentized range test, bars with the same letters are not significantly different at the 0.05 level.</p>
Full article ">Figure 11
<p>Maize grain production in (<b>a</b>) differential yield zones and (<b>b</b>) production fields. Using Tukey’s studentized range test, bars with the same letters are not significantly different at the 0.05 level.</p>
Full article ">
18 pages, 3655 KiB  
Article
Investigating the Role of Cover-Crop Spectra for Vineyard Monitoring from Airborne and Spaceborne Remote Sensing
by Michael Williams, Niall G. Burnside, Matthew Brolly and Chris B. Joyce
Remote Sens. 2024, 16(21), 3942; https://doi.org/10.3390/rs16213942 - 23 Oct 2024
Viewed by 747
Abstract
The monitoring of grape quality parameters within viticulture using airborne remote sensing is an increasingly important aspect of precision viticulture. Airborne remote sensing allows high volumes of spatially consistent data to be collected with improved efficiency over ground-based surveys. Spectral data can be [...] Read more.
The monitoring of grape quality parameters within viticulture using airborne remote sensing is an increasingly important aspect of precision viticulture. Airborne remote sensing allows high volumes of spatially consistent data to be collected with improved efficiency over ground-based surveys. Spectral data can be used to understand the characteristics of vineyards, including the characteristics and health of the vines. Within viticultural remote sensing, the use of cover-crop spectra for monitoring is often overlooked due to the perceived noise it generates within imagery. However, within viticulture, the cover crop is a widely used and important management tool. This study uses multispectral data acquired by a high-resolution uncrewed aerial vehicle (UAV) and Sentinel-2 MSI to explore the benefit that cover-crop pixels could have for grape yield and quality monitoring. This study was undertaken across three growing seasons in the southeast of England, at a large commercial wine producer. The site was split into a number of vineyards, with sub-blocks for different vine varieties and rootstocks. Pre-harvest multispectral UAV imagery was collected across three vineyard parcels. UAV imagery was radiometrically corrected and stitched to create orthomosaics (red, green, and near-infrared) for each vineyard and survey date. Orthomosaics were segmented into pure cover-cropuav and pure vineuav pixels, removing the impact that mixed pixels could have upon analysis, with three vegetation indices (VIs) constructed from the segmented imagery. Sentinel-2 Level 2a bottom of atmosphere scenes were also acquired as close to UAV surveys as possible. In parallel, the yield and quality surveys were undertaken one to two weeks prior to harvest. Laboratory refractometry was performed to determine the grape total acid, total soluble solids, alpha amino acids, and berry weight. Extreme gradient boosting (XGBoost v2.1.1) was used to determine the ability of remote sensing data to predict the grape yield and quality parameters. Results suggested that pure cover-cropuav was a successful predictor of grape yield and quality parameters (range of R2 = 0.37–0.45), with model evaluation results comparable to pure vineuav and Sentinel-2 models. The analysis also showed that, whilst the structural similarity between the UAV and Sentinel-2 data was high, the cover crop is the most influential spectral component within the Sentinel-2 data. This research presents novel evidence for the ability of cover-cropuav to predict grape yield and quality. Moreover, this finding provides a mechanism that explains the success of the Sentinel-2 modelling of grape yield and quality. For growers and wine producers, creating grape yield and quality prediction models through moderate-resolution satellite imagery would be a significant innovation. Proving more cost-effective than UAV monitoring for large vineyards, such methodologies could also bring substantial cost savings to vineyard management. Full article
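A minimal sketch of the kind of XGBoost regression described above is given below, assuming a small table of per-block cover-crop vegetation index features and one laboratory-measured quality parameter. The feature layout, hyperparameters, and synthetic data are illustrative assumptions and do not reproduce the paper's v2.1.1 model configuration.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# X: one row per sample block, columns = cover-crop vegetation indices
# (e.g. NDVI, GNDVI, NDRE means); y: a lab-measured quality parameter
# such as total soluble solids.  Random data here, for illustration only.
rng = np.random.default_rng(42)
X = rng.random((120, 3))
y = 20 + 5 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, 120)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("R2 on held-out blocks:", round(r2_score(y_test, pred), 3))
```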
Show Figures

Figure 1
<p>(Subfigures (<b>A</b>): top left, (<b>B</b>): right, (<b>C</b>): bottom left): Three sample vineyards, (<b>A</b>): 62 vine rows, study area of 20,000 m<sup>2</sup>; (<b>B</b>): 8 vine rows, centre coordinates 51.047926, 0.789715, study area of 10,000 m<sup>2</sup>; (<b>C</b>): 25 vine rows, centre coordinates, study area of 15,400 m<sup>2</sup>.</p>
Full article ">Figure 2
<p>(<b>Left</b>): an example of a sampling strategy at Bottom Camp, (<b>Right</b>): example of vine and cover-crop segmentation at the individual block of ten vines.</p>
Full article ">Figure 3
<p>Near-infrared (NIR), red, and green bandwidths from UAV (<b>top</b>) and Sentinel-2 (<b>bottom</b>) were used in this study. The difference in resolution between the platforms is evident, with the vine rows indiscernible within the Sentinel-2 imagery.</p>
Full article ">Figure 4
<p>Sentinel-2 XGBoost regression outputs for four grape yield and quality parameters (<b>a</b>): total acid; (<b>b</b>): total soluble solids; (<b>c</b>): alpha amino acids; and (<b>d</b>): berry weight. Results plot the predicted <span class="html-italic">Y</span> variable from remote sensing data against the observed laboratory-derived quality parameter. Shown with 95% confidence intervals.</p>
Full article ">Figure 5
<p>UAV vine XGBoost regression outputs for four grape yield and quality parameters ((<b>a</b>): total acid, (<b>b</b>): total soluble solids, (<b>c</b>): alpha amino acids, and (<b>d</b>): berry weight). Results plot the predicted <span class="html-italic">Y</span> variable from remote sensing data against the observed laboratory-derived quality parameter. Shown with 95% confidence intervals.</p>
Full article ">Figure 6
<p>UAV-derived cover-crop XGBoost regression outputs for four grape yield and quality parameters ((<b>a</b>): total acid, (<b>b</b>): total soluble solids, (<b>c</b>): alpha amino acids, and (<b>d</b>): berry weight). Results plot the predicted <span class="html-italic">Y</span> variable from remote sensing data against the observed laboratory-derived quality parameter. Shown with 95% confidence intervals.</p>
Full article ">Figure 7
<p>Structure similarity index and difference between Sentinel-2 (S2) near-infrared (NIR) and uncrewed aerial vehicle (UAV) NIR data at Butness (BT).</p>
Full article ">Figure 8
<p>Structure similarity index and difference between Sentinel-2 (S2) near-infrared (NIR) and uncrewed aerial vehicle (UAV) NIR data at Bottom Camp (BC).</p>
Full article ">Figure 9
<p>Structure similarity index and difference between Sentinel-2 (S2) near-infrared (NIR) and uncrewed aerial vehicle (UAV) NIR data at Boothill (BH).</p>
Full article ">Figure 10
<p>The relationship between S2 NDVI with vine<sub>uav</sub> NDVI (<b>a</b>) and cover-crop<sub>uav</sub> NDVI (<b>b</b>) with data points classified by sample vineyard: Bottom Camp (bc), Boothill (bh), and Butness (bt).</p>
Full article ">
23 pages, 16662 KiB  
Article
Evaluating Burn Severity and Post-Fire Woody Vegetation Regrowth in the Kalahari Using UAV Imagery and Random Forest Algorithms
by Madeleine Gillespie, Gregory S. Okin, Thoralf Meyer and Francisco Ochoa
Remote Sens. 2024, 16(21), 3943; https://doi.org/10.3390/rs16213943 - 23 Oct 2024
Viewed by 1010
Abstract
Accurate burn severity mapping is essential for understanding the impacts of wildfires on vegetation dynamics in arid savannas. The frequent wildfires in these biomes often cause topkill, where the vegetation experiences above-ground combustion but the below-ground root structures survive, allowing for subsequent regrowth [...] Read more.
Accurate burn severity mapping is essential for understanding the impacts of wildfires on vegetation dynamics in arid savannas. The frequent wildfires in these biomes often cause topkill, where the vegetation experiences above-ground combustion but the below-ground root structures survive, allowing for subsequent regrowth post-burn. Investigating post-fire regrowth is crucial for maintaining ecological balance, elucidating fire regimes, and enhancing the knowledge base of land managers regarding vegetation response. This study examined the relationship between bush burn severity and woody vegetation post-burn coppicing/regeneration events in the Kalahari Desert of Botswana. Utilizing UAV-derived RGB imagery combined with a Random Forest (RF) classification algorithm, we aimed to enhance the precision of burn severity mapping at a fine spatial resolution. Our research focused on a 1 km2 plot within the Modisa Wildlife Reserve, extensively burnt by the Kgalagadi Transfrontier Fire of 2021. The UAV imagery, captured at various intervals post-burn, provided detailed orthomosaics and canopy height models, facilitating precise land cover classification and burn severity assessment. The RF model achieved an overall accuracy of 79.71% and effectively identified key burn severity indicators, including green vegetation, charred grass, and ash deposits. Our analysis revealed a >50% probability of woody vegetation regrowth in high-severity burn areas six months post-burn, highlighting the resilience of these ecosystems. This study demonstrates the efficacy of low-cost UAV photogrammetry for fine-scale burn severity assessment and provides valuable insights into post-fire vegetation recovery, thereby aiding land management and conservation efforts in savannas. Full article
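The Random Forest classification step described above can be sketched as follows, assuming a feature table of per-pixel predictors (RGB bands, derived indices, and canopy height) with manually labelled burn-severity classes. The feature set, class labels, hyperparameters, and synthetic data are illustrative assumptions rather than the authors' exact model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# Feature table: one row per labelled pixel/segment with columns such as
# R, G, B, green chromatic coordinate, char index and canopy height (CHM).
# Labels: land-cover / burn-severity classes (e.g. 0 = green vegetation,
# 1 = charred grass, 2 = ash).  Synthetic data, for illustration only.
rng = np.random.default_rng(7)
X = rng.random((3000, 6))
y = rng.integers(0, 3, size=3000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)

y_pred = rf.predict(X_test)
print("Overall accuracy:", round(accuracy_score(y_test, y_pred), 3))
print(confusion_matrix(y_test, y_pred, normalize="true"))
print("Feature importances:", np.round(rf.feature_importances_, 3))
```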
Show Figures

Figure 1
<p>Kgalagadi Transfrontier Fire (KTF) extent and location in Botswana. Modisa is indicated by a red star in the left panel of the figure. The natural color satellite imagery of the Kgalagadi Transfrontier Fire in the left panel was acquired by the National Aeronautics and Space Administration’s (NASA, Washington, DC, USA) Moderate Resolution Imaging Spectroradiometer (MODIS, Washington, DC, USA) from its Aqua satellite on 8 September 2021 at a 250-m resolution.</p>
Full article ">Figure 2
<p>The top left corner depicts the 1 sq. km post-burn plot of land that this study primarily focused on. The bottom panel offers a closer look at the burn impacts within the plot of land. The top right corner displays the location of the study site, Modisa, in Botswana, indicated by a red star.</p>
Full article ">Figure 3
<p>Flow chart showing the steps of the burn severity classification model along with the datasets and software used. R: red; G: green; B: blue; GLCM: gray-level co-occurrence matrix; UAS: unmanned aerial system.</p>
Full article ">Figure 4
<p>Visualizations of land cover classification schema and their corresponding burn severity rankings.</p>
Full article ">Figure 5
<p>Original RGB drone images (<b>left</b>) and the manually classified land cover classifications (<b>right</b>).</p>
Full article ">Figure 6
<p>Visual comparison of 12-h post-burn imagery and 6-month post-burn imagery. Woody vegetation regrowth visualization is defined and compared to herbaceous cover, as indicated by the red box outlines. Regrowth was determined based on patch regrowth rather than analysis at the pixel level.</p>
Full article ">Figure 7
<p><b>Top Panel</b>: Original drone image 12 h post burn (<b>left</b>) and the Random Forest model-predicted land cover classification map (<b>right</b>). Three outlined regions (A, B, C) are indicated. <b>Bottom Panels</b>: The zoomed-in regions from the model-predicted map and the original RGB map for better visualization.</p>
Full article ">Figure 8
<p>Random Forest classification results reclassified to represent burn severity rankings.</p>
Full article ">Figure 9
<p>Confusion matrix for Random Forest classification of burn severity. Each cell shows the proportion of observations predicted versus the actual observed categories, highlighting the model’s precision and misclassification rates. Numerical values and gradient color of the cells represent the normalized value of correct pixel predictions.</p>
Full article ">Figure 10
<p>Feature importance within the Random Forest classification model. CHM = Canopy Height Model; RGB = Red band, green band, blue band; GCC = Green Chromatic Coordinate; CI = Char Index; max_diff = Max Difference Index; EGI = Excessive Greenness Index; BI = Brightness Index.</p>
Full article ">Figure 11
<p>Woody vegetation survival and regrowth. This figure presents the probability of survival and regrowth in woody vegetation at 6 months and 2.5 years post-burn across the 1000 derived Monte Carlo outputs. Mean probabilities and standard deviations are calculated for each category. Wider violin plots indicate a higher likelihood of regrowth, while narrower plots suggest a lower likelihood.</p>
Full article ">Figure 12
<p>Sample of RGB images used within the manual classification dataset and their corresponding CHM in meters. White spots within the CHM are indicative of taller vegetation.</p>
Full article ">
22 pages, 11338 KiB  
Article
Estimating Carbon Stock in Unmanaged Forests Using Field Data and Remote Sensing
by Thomas Leditznig and Hermann Klug
Remote Sens. 2024, 16(21), 3926; https://doi.org/10.3390/rs16213926 - 22 Oct 2024
Viewed by 1182
Abstract
Unmanaged forest ecosystems play a critical role in addressing the ongoing climate and biodiversity crises. As there is no commercial interest in monitoring the health and development of such inaccessible habitats, low-cost assessment approaches are needed. We used a method combining RGB imagery [...] Read more.
Unmanaged forest ecosystems play a critical role in addressing the ongoing climate and biodiversity crises. As there is no commercial interest in monitoring the health and development of such inaccessible habitats, low-cost assessment approaches are needed. We used a method combining RGB imagery acquired using an Unmanned Aerial Vehicle (UAV), Sentinel-2 data, and field surveys to determine the carbon stock of an unmanaged forest in the UNESCO World Heritage Site wilderness area Dürrenstein-Lassingtal in Austria. The entry-level consumer drone (DJI Mavic Mini) and freely available Sentinel-2 multispectral datasets were used for the evaluation. We merged the Sentinel-2 derived vegetation index NDVI with aerial photogrammetry data and used an orthomosaic and a Digital Surface Model (DSM) to map the extent of woodland in the study area. The Random Forest (RF) machine learning (ML) algorithm was used to classify land cover. Based on the acquired field data, the average carbon stock per hectare of forest was determined to be 371.423 ± 51.106 t of CO2 and applied to the ML-generated class Forest. An overall accuracy of 80.8% with a Cohen’s kappa value of 0.74 was achieved for the land cover classification, while the carbon stock of the living above-ground biomass (AGB) was estimated with an accuracy within 5.9% of field measurements. The proposed approach demonstrated that the combination of low-cost remote sensing data and field work can predict above-ground biomass with high accuracy. The results and the estimation error distribution highlight the importance of accurate field data. Full article
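The final scaling step, applying a field-derived per-hectare carbon figure to the area classified as forest, can be sketched as below. The function and variable names are hypothetical and the classification raster is synthetic; only the 371.423 t CO2 per hectare figure is taken from the abstract.

```python
import numpy as np

def forest_carbon_stock(classification, pixel_size_m, carbon_per_ha, forest_class=1):
    """Scale a per-hectare carbon figure to the classified forest area.

    classification : 2-D array of land-cover class ids from the RF model
    pixel_size_m   : ground sampling distance of the raster in metres
    carbon_per_ha  : mean carbon stock per hectare of forest (t CO2/ha),
                     e.g. the field-derived 371.423 t/ha reported above
    """
    forest_pixels = np.count_nonzero(classification == forest_class)
    forest_area_ha = forest_pixels * (pixel_size_m ** 2) / 10_000.0
    return forest_area_ha * carbon_per_ha

# Illustrative call on a synthetic 10 m classification raster.
rng = np.random.default_rng(3)
classes = rng.integers(0, 3, size=(500, 500))
print(forest_carbon_stock(classes, pixel_size_m=10.0, carbon_per_ha=371.423))
```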
Show Figures

Figure 1
<p>A map of the study area.</p>
Full article ">Figure 2
<p>Sample plots for the in situ measurements.</p>
Full article ">Figure 3
<p>The generated DSM and orthomosaic with a resolution of 3 cm.</p>
Full article ">Figure 4
<p>NDVI indices with a resolution of 10 m.</p>
Full article ">Figure 5
<p>Training samples.</p>
Full article ">Figure 6
<p>The reference dataset and validation points (blue) for the Image Classification Wizard.</p>
Full article ">Figure 7
<p>Tree species found in the field plots.</p>
Full article ">Figure 8
<p>The classification raster of the study area.</p>
Full article ">Figure 9
<p>A comparison of the land cover classification and the orthomosaic.</p>
Full article ">Figure 10
<p>The Q-Q plot of the distribution of the carbon stock estimation errors.</p>
Full article ">