Search Results (323)

Search Parameters:
Keywords = very high resolution satellite imagery

16 pages, 2722 KiB  
Article
Evaluation of Sentinel-2 Deep Resolution 3.0 Data for Winter Crop Identification and Organic Barley Yield Prediction
by Milen Chanev, Ilina Kamenova, Petar Dimitrov and Lachezar Filchev
Remote Sens. 2025, 17(6), 957; https://doi.org/10.3390/rs17060957 - 8 Mar 2025
Viewed by 3
Abstract
Barley is an ecologically adaptable crop widely used in agriculture and well suited for organic farming. Satellite imagery from Sentinel-2 can support crop monitoring and yield prediction, optimising production processes. This study compares two types of Sentinel-2 data—standard (S2) data with 10 m and 20 m resolution and Sentinel-2 Deep Resolution 3 (S2DR3) data with 1 m resolution—to assess their (i) relationship with yield in organically grown barley and (ii) utility for winter crop mapping. Vegetation indices were generated and analysed across different phenological phases to determine the most suitable predictors of yield. The results indicate that using 10 × 10 m data, the BBCH-41 phase is optimal for yield prediction, with the Green Chlorophyll Vegetation Index (GCVI; r = 0.80) showing the strongest correlation with yield. In contrast, S2DR3 data with a 1 × 1 m resolution demonstrated that the Transformed Chlorophyll Absorption in Reflectance Index (TCARI), TO, and the Normalised Difference Red Edge Index (NDRE1) were consistently reliable across all phenological stages, except for BBCH-51, which showed weak correlations. These findings highlight the potential of remote sensing in organic barley farming and emphasise the importance of selecting appropriate data resolutions and vegetation indices for accurate yield prediction. Using three-date spectral band stacks, the Random Forest (RF) and Support Vector Classification (SVC) methods were applied to differentiate between wheat, barley, and rapeseed. A five-fold cross-validation approach was applied, training data were stratified with 200 points per crop, and classification accuracy was assessed using the User’s and Producer’s accuracy metrics through pixel-by-pixel comparison with a reference raster. The results for S2 and S2DR3 were very similar to each other, confirming the significant potential of S2DR3 for high-resolution crop mapping. Full article
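As a minimal sketch of the yield-correlation step described above (not code from the paper), the snippet below computes the Green Chlorophyll Vegetation Index (GCVI = NIR/Green - 1) from hypothetical per-plot reflectances at BBCH-41 and correlates it with yield; the band values and yields are invented placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

def gcvi(nir, green):
    """Green Chlorophyll Vegetation Index: GCVI = NIR / Green - 1."""
    return nir / green - 1.0

# Hypothetical per-plot mean reflectances at BBCH-41 and measured yields (t/ha).
nir = np.array([0.42, 0.38, 0.45, 0.35, 0.40])
green = np.array([0.08, 0.09, 0.07, 0.10, 0.085])
yield_t_ha = np.array([3.1, 2.6, 3.4, 2.2, 2.9])

index = gcvi(nir, green)
r, p = pearsonr(index, yield_t_ha)
print(f"GCVI vs. yield: r = {r:.2f}, p = {p:.3f}")
```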
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Figure 1. (A) The overall location of the study area in Bulgaria. The image data used in this study correspond to Sentinel-2 tile T35TLG shown as a red square. (B) Sentinel-2 Deep Resolution 3.0 imagery in natural colours from 30 April 2023 obtained over tile T35TLG. The red rectangle and dot indicate the location of the area used for crop classification and the field used for collecting crop yield data, respectively. Closer looks at these two sites are shown in (C,D), respectively. (E) Subsets of Sentinel-2 Deep Resolution 3.0 (top, 1 m spatial resolution) and Sentinel-2 (10 m spatial resolution) obtained over an arbitrary agricultural area (location shown with arrow on (C); 30 April 2023; natural colours).
Figure 2. Correlation between yield of organic barley from S2DR3 data (blue) and S2 data (orange) in phase BBCH-41.
Figure 3. Correlation between yield of organic barley from S2DR3 data (blue) and S2 data (orange) in phase BBCH-51.
Figure 4. Correlation between yield of organic barley from S2DR3 data (blue) and S2 data (orange) in phase BBCH-77.
Figure 5. Maps showing the distribution of the three winter crops according to the SVC classifications of the Sentinel-2 (S2; left) and Sentinel-2 Deep Resolution 3.0 (S2DR3; middle) multitemporal datasets and the reference vector data (IACS/GSA; right). The areas indicated with “A” and “B” illustrate two typical error patterns (see text for more details). The maps cover Site 2 (see Figure 1C for its location).
Figure 6. Spectral profiles of the three winter crops for the two image types, S2DR3 and S2, and the three dates. Dots represent the mean and error bars represent the standard deviation of 50 randomly selected pixels for each crop. Pixels were sampled at image native resolution, i.e., 1 m for S2DR3 and 10 m/20 m for S2. Sampling locations were the same for S2DR3 and S2.
28 pages, 28459 KiB  
Article
Multi-Temporal Remote Sensing Satellite Data Analysis for the 2023 Devastating Flood in Derna, Northern Libya
by Roman Shults, Ashraf Farahat, Muhammad Usman and Md Masudur Rahman
Remote Sens. 2025, 17(4), 616; https://doi.org/10.3390/rs17040616 - 11 Feb 2025
Viewed by 503
Abstract
Floods are considered to be among the most dangerous and destructive geohazards, leading to human victims and severe economic outcomes. Yearly, many regions around the world suffer from devastating floods. The estimation of flood aftermaths is one of the high priorities for the global community. One such flood took place in northern Libya in September 2023. The presented study is aimed at evaluating the flood aftermath for Derna city, Libya, using high-resolution GEOEYE-1 and Sentinel-2 satellite imagery in the Google Earth Engine environment. The primary task is obtaining and analyzing data that provide high accuracy and detail for the study region. The main objective of the study is to explore the capabilities of different algorithms and remote sensing datasets for quantitative change estimation after the flood. Different supervised classification methods were examined, including random forest, support vector machine, naïve Bayes, and classification and regression tree (CART). Various sets of hyperparameters for classification were considered. The high-resolution GEOEYE-1 images were used for precise change detection using image differencing (pixel-to-pixel comparison) and geographic object-based image analysis (GEOBIA) for extracting buildings, whereas Sentinel-2 data were employed for classification and further change detection using the classified images. Object-based image analysis (OBIA) was also performed for the extraction of building footprints from the very high resolution GEOEYE images to quantify the buildings that collapsed due to the flood. The first stage of the study was the development of a workflow for data analysis. This workflow includes three parallel processes of data analysis. High-resolution GEOEYE-1 images of Derna city were investigated with change detection algorithms. In addition, different indices (normalized difference vegetation index (NDVI), soil adjusted vegetation index (SAVI), transformed NDVI (TNDVI), and normalized difference moisture index (NDMI)) were calculated to facilitate the recognition of damaged regions. In the final stage, the analysis results were fused to obtain the damage estimation for the studied region. As the main output, the area changes for the primary classes and the maps that portray these changes were obtained. Recommendations for data usage and further processing in Google Earth Engine were developed. Full article
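A minimal illustration of the index-differencing step, assuming the standard definitions of NDVI, SAVI, TNDVI, and NDMI and synthetic band arrays (this is not the study's Google Earth Engine code):

```python
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    return (nir - red) * (1 + L) / (nir + red + L)

def tndvi(nir, red):
    return np.sqrt(ndvi(nir, red) + 0.5)

def ndmi(nir, swir):
    return (nir - swir) / (nir + swir)

# Toy pre-/post-flood Sentinel-2-like band arrays (surface reflectance, 0-1).
rng = np.random.default_rng(0)
red_pre, nir_pre, swir_pre = rng.uniform(0.05, 0.4, (3, 100, 100))
red_post, nir_post, swir_post = rng.uniform(0.05, 0.4, (3, 100, 100))

# Change detection by index differencing: positive values = index decrease after the flood.
d_ndvi = ndvi(nir_pre, red_pre) - ndvi(nir_post, red_post)
d_ndmi = ndmi(nir_pre, swir_pre) - ndmi(nir_post, swir_post)
print("Mean NDVI change:", float(d_ndvi.mean()))
```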
(This article belongs to the Special Issue Image Processing from Aerial and Satellite Imagery)
Figure 1. Map of Libya with the location of Derna (P.C. BBC News after the flooding).
Figure 2. Pre-flood (1 July 2023) and post-flood (13 September 2023) images show damage caused by the collapse of the Derna Dam.
Figure 3. Change detection analysis flowchart.
Figure 4. Image differencing for the Derna region using (a) band 1, (b) band 2, and (c) band 3.
Figure 5. Area changes: (a) band 1, (b) band 2, and (c) band 3.
Figure 6. Spectral indices and their changes for the Derna area using NDVI (a,b), SAVI (c,d), TNDVI (e,f), and NDMI (g,h).
Figure 7. Derna region random forest classification results for images dated (a) 18 August 2023 and (b) 22 September 2023.
Figure 8. Derna region CART classification results for images dated (a) 18 August 2023 and (b) 22 September 2023.
Figure 9. Derna region naïve Bayes classification for 18 August 2023 (a) and 22 September 2023 (b).
Figure 10. SVM hyperparameters and classification approaches.
Figure 11. Derna region SVM classification for 18 August 2023 (a) and 22 September 2023 (b).
Figure 12. Derna region SVM classification with the polynomial kernel for 18 August 2023 (a) and 22 September 2023 (b).
Figure 13. Building footprints extracted using GEOBIA and buildings damaged due to the flash flood.
27 pages, 4799 KiB  
Article
Deep Learning-Based Land Cover Extraction from Very-High-Resolution Satellite Imagery for Assisting Large-Scale Topographic Map Production
by Yofri Furqani Hakim and Fuan Tsai
Remote Sens. 2025, 17(3), 473; https://doi.org/10.3390/rs17030473 - 30 Jan 2025
Viewed by 495
Abstract
The demand for large-scale topographic maps in Indonesia has significantly increased due to the implementation of several government initiatives that necessitate the utilization of spatial data in development planning. Currently, the national production capacity for large-scale topographic maps in Indonesia is 13,000 km2/year using stereo-plotting/mono-plotting methods from photogrammetric data, Lidar, high-resolution satellite imagery, or a combination of the three. In order to provide the necessary data to the respective applications in a timely manner, one strategy is to generate only the critical layers of the maps. One of the topographic map layers that is often needed is land cover. This research focuses on providing land cover data to support the accelerated provision of topographic maps. The data used are very-high-resolution satellite images. The method used is a deep learning approach to classify very-high-resolution satellite images into land cover data. The implementation of the deep learning approach can advance the production of topographic maps, particularly in the provision of land cover data. This significantly enhances the efficiency and effectiveness of producing large-scale topographic maps, hence increasing productivity. The quality assessment of this study demonstrates that the AI-assisted method is capable of accurately classifying land cover data from very-high-resolution images, as indicated by a Kappa value of 0.81 and an overall accuracy of 86%. Full article
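The reported Kappa of 0.81 and overall accuracy of 86% are standard agreement metrics; a generic sketch of how such values can be computed from reference and predicted labels (synthetic labels, not the study's data) is shown below.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, accuracy_score, confusion_matrix

# Hypothetical reference vs. predicted land cover labels for sampled pixels
# (0 = building, 1 = vegetation, 2 = road, 3 = water) -- illustrative classes only.
rng = np.random.default_rng(42)
reference = rng.integers(0, 4, size=500)
predicted = np.where(rng.random(500) < 0.86, reference, rng.integers(0, 4, size=500))

print("Overall accuracy:", accuracy_score(reference, predicted))
print("Kappa:", cohen_kappa_score(reference, predicted))
print(confusion_matrix(reference, predicted))
```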
(This article belongs to the Special Issue Advances in Deep Learning Approaches in Remote Sensing)
Figure 1. The coverage of the topographic map at a scale of 1:5000. It covers approximately 3% of the whole area of Indonesia (source: Geospatial Information Agency, Indonesia).
Figure 2. Forest area that covers 49.7% of Indonesia’s territory (source: Ministry of Environment and Forestry and Geospatial Information Agency, Indonesia).
Figure 3. The case study area is located in Mataram city, in the Province of Nusatenggara Barat, Indonesia.
Figure 4. The flowchart of the research framework.
Figure 5. Example of the pan-sharpening process. Multispectral bands at 2 m resolution fused with a 0.5 m resolution panchromatic band generate pan-sharpened bands at 0.5 m resolution.
Figure 6. The U-Net architecture is demonstrated here with a resolution of 32 × 32 pixels at its lowest level. A multi-channel feature map is represented by each blue box. The number of channels is denoted at the top of the unit. The box’s dimensions are denoted in the lower left quadrant. The feature maps that are duplicated are represented by the white box. A variety of operations are denoted by the arrows.
Figure 7. The designated test location within Mataram city is denoted by the red box. The area measures 9.18 km by 6.90 km.
Figure 8. Illustrations of semi-variance diagrams for “building” in sample 1 and sample 2.
Figure 9. Applied to the original image (a); images (b–f) reflect the five GLCM formula results.
Figure 10. Examples of the first and second PCA bands.
Figure 11. The relationship between the F1-score and the number of epochs.
Figure 12. An illustration of morphological post-classification processing.
Figure 13. An example of the final result of deep learning-based land cover classification, presented using the Universal Transverse Mercator (UTM) Map Projection Zone 50S at a scale of 1:5000.
20 pages, 9743 KiB  
Article
UAV-Based Survey of the Earth Pyramids at the Kuklica Geosite (North Macedonia)
by Ivica Milevski, Bojana Aleksova and Slavoljub Dragićević
Heritage 2025, 8(1), 6; https://doi.org/10.3390/heritage8010006 - 26 Dec 2024
Viewed by 1052
Abstract
This paper presents methods for a UAV-based survey of the site “Kuklica” near Kratovo, North Macedonia. Kuklica is a rare natural complex with earth pyramids, and because of its exceptional scientific, educational, touristic, and cultural significance, it was proclaimed a Natural Monument in 2008. However, after the proclamation, interest in visiting the site and the threat of its potential degradation grew rapidly, increasing the need for a detailed survey and monitoring of the site. Given the site’s small size (0.5 km2), freely available satellite images and digital elevation models are not suitable for comprehensive analysis and monitoring of the site, especially in terms of the individual forms within it. Instead, new tools are increasingly being used for such tasks, including UAVs (unmanned aerial vehicles) and LiDAR (Light Detection and Ranging). Since professional LiDAR is very expensive and still not readily available, we used a low-cost UAV (DJI Mini 4 Pro) to carry out a detailed survey. First, the flight path, the altitude of the UAV, the camera angle, and the photo recording intervals were precisely planned and defined. Also, the ground markers (checkpoints) were carefully selected. Then, the photos taken by the drone were aligned and processed using Agisoft Metashape software (v. 2.1.4), producing a digital elevation model and orthophoto imagery with a very high (sub-decimeter) resolution. Following this procedure, more than 140 earth pyramids were delineated, ranging in height from 1–2 m up to 30 m at their highest. At this stage, a very accurate UAV-based 3D model of the most remarkable earth pyramids was developed (the accuracy was checked using the iPhone 14 Pro LiDAR module), and their morphometrical properties were calculated. Also, the site’s erosion rate and flash flood potential were calculated, showing high susceptibility to both. The final goal was to monitor changes and to minimize degradation of the unique landscape, thus better protecting the geosite and its value. Full article
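As a hedged illustration of the morphometric step (not the authors' workflow), the sketch below estimates the relative height and footprint of one delineated form from a synthetic 0.1 m DEM and a digitised outline mask; in practice the DEM and outlines would come from the UAV survey.

```python
import numpy as np

# Toy 0.1 m DEM patch (metres above sea level) and a boolean mask of one
# delineated earth pyramid; values here are synthetic.
rng = np.random.default_rng(1)
dem = 520.0 + rng.normal(0, 0.2, (200, 200))
yy, xx = np.mgrid[:200, :200]
pyramid = (yy - 100) ** 2 + (xx - 100) ** 2 < 40 ** 2   # circular footprint mask
dem[pyramid] += 8.0 * np.exp(-((yy[pyramid] - 100) ** 2 + (xx[pyramid] - 100) ** 2) / (2 * 20 ** 2))

base = np.percentile(dem[~pyramid], 5)        # local base level around the form
height = dem[pyramid].max() - base            # relative height of the pyramid
footprint_m2 = pyramid.sum() * 0.1 * 0.1      # pixel area at 0.1 m resolution
print(f"height ~ {height:.1f} m, footprint ~ {footprint_m2:.0f} m2")
```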
(This article belongs to the Section Geoheritage and Geo-Conservation)
Figure 1. Location of the NM Kuklica site in North Macedonia (left) and its catchment area (right).
Figure 2. The east (left) and the west (right) side of the site of the earth pyramids in the NM Kuklica site.
Figure 3. Test polygons as the input for the machine learning classification (ANN).
Figure 4. Machine learning ANN classification of land cover in the Kuklica area.
Figure 5. Identified and delineated earth pyramids in the Agisoft Metashape 3D model.
Figure 6. Inventory of the remarkable earth pyramids in NM Kuklica. Morphometry is calculated using the 0.1 m resolution UAV-based DEM.
Figure 7. Erosion susceptibility map (A) and mean annual erosion rate (B) of the Kuklica catchment area and corresponding maps of the NM Kuklica site (C,D).
Figure 8. Flash Flood Potential Index in regard to the Kuklica catchment (A,B) and the NM Kuklica site (C).
Figure 9. Careful inspection and comparison of earth pyramid photos from the first research carried out in 1995 and the last visit in 2024, which shows discrete morphological changes.
25 pages, 9000 KiB  
Article
Five-Year Evaluation of Sentinel-2 Cloud-Free Mosaic Generation Under Varied Cloud Cover Conditions in Hawai’i
by Francisco Rodríguez-Puerta, Ryan L. Perroy, Carlos Barrera, Jonathan P. Price and Borja García-Pascual
Remote Sens. 2024, 16(24), 4791; https://doi.org/10.3390/rs16244791 - 22 Dec 2024
Viewed by 1339
Abstract
The generation of cloud-free satellite mosaics is essential for a range of remote sensing applications, including land use mapping, ecosystem monitoring, and resource management. This study focuses on remote sensing across the climatic diversity of Hawai’i Island, which encompasses ten Köppen climate zones from tropical to Arctic: periglacial. This diversity presents unique challenges for cloud-free image generation. We conducted a comparative analysis of three cloud-masking methods: two Google Earth Engine algorithms (CloudScore+ and s2cloudless) and a new proprietary deep learning-based algorithm (L3) applied to Sentinel-2 imagery. These methods were evaluated against the best monthly composite selected from high-frequency Planet imagery, which acquires daily images. All Sentinel-2 bands were enhanced to a 10 m resolution, and an advanced weather mask was applied to generate monthly mosaics from 2019 to 2023. We stratified the analysis by cloud cover frequency (low, moderate, high, and very high), applying one-way and two-way ANOVAs to assess cloud-free pixel success rates. Results indicate that CloudScore+ achieved the highest success rate at 89.4% cloud-free pixels, followed by L3 and s2cloudless at 79.3% and 80.8%, respectively. Cloud removal effectiveness decreased as cloud cover increased, with clear pixel success rates ranging from 94.6% under low cloud cover to 79.3% under very high cloud cover. Additionally, seasonality effects showed higher cloud removal rates in the wet season (88.6%), while no significant year-to-year differences were observed from 2019 to 2023. This study advances current methodologies for generating reliable cloud-free mosaics in tropical and subtropical regions, with potential applications for remote sensing in other cloud-dense environments. Full article
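A minimal sketch of the one-way ANOVA and Tukey HSD comparison on clear-pixel success rates, using synthetic per-block values whose group means loosely follow the figures quoted above (not the study's data):

```python
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic "clear pixel" success rates (%) per evaluated block for three masks.
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "mask": np.repeat(["CloudScore+", "s2cloudless", "L3"], 60),
    "clear_pct": np.concatenate([
        rng.normal(89.4, 5, 60),   # CloudScore+
        rng.normal(80.8, 6, 60),   # s2cloudless
        rng.normal(79.3, 6, 60),   # L3
    ]),
})

groups = [g["clear_pct"].values for _, g in df.groupby("mask")]
print(f_oneway(*groups))                                # one-way ANOVA across masks
print(pairwise_tukeyhsd(df["clear_pct"], df["mask"]))   # pairwise Tukey HSD
```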
(This article belongs to the Special Issue Advances in Deep Learning Approaches in Remote Sensing)
Figure 1. Island of Hawai’i: (a) Köppen climate map adapted from the Mauna Loa Observatory Report [40] and (b) the annual mean cloud cover percentage based on the Hawai’i Climate Atlas [39].
Figure 2. Cloud weight computation for a single Sentinel-2 image, showing different spatial resolutions used during the process.
Figure 3. Cloud cover stratification across the island of Hawai’i, derived from data provided by the Hawaiian Climate Atlas [39]. The black squares represent the locations of the evaluated blocks (a total of 240 blocks, each covering 100 hectares) within each cloud cover stratum (60 blocks allocated to each cloud cover stratum). Yellow stars indicate the spatial distribution of the sample blocks presented in the Results.
Figure 4. Example of a visual inspection of a moderate cloud cover block (ID = 13,020): Panel (A) shows mosaics of the evaluated masks: (1) CloudScore+, (2) s2cloudless, (3) L3, and (4) the Planet reference image for comparison. Panel (B) shows the results after visual inspection, with “clear” pixels colored purple, “cloudy” pixels colored orange, and “no data” pixels colored gray.
Figure 5. Visual examples of cloud detection results under varying cloud cover conditions (low, moderate, high, and very high) for the CloudScore+, s2cloudless, and L3 algorithms. This figure is divided into four quadrants, each representing a specific cloud cover level. Within each quadrant, individual 1 × 1 km blocks are ordered from left to right according to the accuracy quartiles of the L3 mask, with the leftmost column showing the highest accuracy quartile (Q1) and the rightmost column representing the lowest accuracy quartile (Q4). Rows correspond to the evaluated masks (top to bottom: CloudScore+, s2cloudless, L3) and PlanetScope imagery as ground truth. Errors (clouds) are displayed as white areas, while “no data” pixels appear as large gray grid patterns. The spatial locations of these blocks are indicated in Figure 3 by yellow stars.
Figure 6. Box plots showing percentages of success (“clear”) in purple, errors (“cloudy”) in orange, and “no data” in gray for the one-way factors analyzed: (top left) mask, (top right) season, (bottom left) year, and (bottom right) cloud cover.
Figure 7. Box plots showing the percentages of success (“clear”) in purple, errors (“cloudy”) in orange, and “no data” in gray for each mask and cloud cover level combination (low, moderate, high, and very high).
Figure 8. Box plots showing the percentages of success (“clear”) in purple, errors (“cloudy”) in orange, and “no data” in gray for each combination of mask, year, and season.
Figure 9. Pairwise comparison of success rates (“clear” pixels) across different levels of cloud cover and mask algorithms, using Tukey’s HSD test. From top to bottom, the cloud cover levels are ordered as low, moderate, high, and very high. Statistically significant differences are marked with an asterisk, and the superior mask (indicated by a higher percentage of “clear” pixels) is underlined. Abbreviations: s2cl = s2cloudless mask, CS = CloudScore+.
Figure 10. Pairwise comparison of error rates (“cloudy” pixels) across different levels of cloud cover and mask algorithms, using Tukey’s HSD test. From top to bottom, cloud cover levels are ordered as low, moderate, high, and very high. Statistically significant differences are marked with an asterisk, and the superior mask (indicated by a lower percentage of “cloudy” pixels) is underlined. Abbreviations: s2cl = s2cloudless mask, CS = CloudScore+.
Figure 11. Pairwise comparison of “no data” percentage across different levels of cloud cover and mask algorithms, using Tukey’s HSD test. From top to bottom, cloud cover levels are ordered as low, moderate, high, and very high. Statistically significant differences are marked with an asterisk, and the mask with fewer “no data” pixels is underlined. Abbreviations: s2cl = s2cloudless mask, CS = CloudScore+.
21 pages, 10857 KiB  
Article
Application of PlanetScope Imagery for Flood Mapping: A Case Study in South Chickamauga Creek, Chattanooga, Tennessee
by Mithu Chanda and A. K. M. Azad Hossain
Remote Sens. 2024, 16(23), 4437; https://doi.org/10.3390/rs16234437 - 27 Nov 2024
Cited by 1 | Viewed by 1226
Abstract
Floods stand out as one of the most expensive natural calamities, causing harm to both lives and properties for millions of people globally. The increasing frequency and intensity of flooding underscores the need for accurate and timely flood mapping methodologies to enhance disaster preparedness and response. Earth observation data obtained through satellites offer comprehensive and recurring perspectives of areas that may be prone to flooding. This paper shows the suitability of high-resolution PlanetScope imagery as an efficient and accessible approach for flood mapping through a case study in South Chickamauga Creek (SCC), Chattanooga, Tennessee, focusing on a significant flooding event in 2020. The extent of the flood water was delineated and mapped using image classification and density slicing of Normalized Difference Water Index (NDWI). The obtained results indicate that PlanetScope imagery performed well in flood mapping for a narrow creek like SCC, achieving an overall accuracy of more than 90% and a Kappa coefficient of over 0.80. The findings of this research contribute to a better understanding of the flood event in Chattanooga and demonstrate that PlanetScope imagery can be utilized as a very useful resource for accurate and timely flood mapping of streams with narrow widths. Full article
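A minimal sketch of NDWI density slicing for water mapping, assuming the McFeeters formulation (Green - NIR)/(Green + NIR), synthetic PlanetScope-like bands, and an illustrative threshold (the study's actual threshold is not reproduced here):

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir)

# Toy PlanetScope-like green/NIR reflectance arrays for pre- and post-flood dates.
rng = np.random.default_rng(3)
green_pre, nir_pre = rng.uniform(0.03, 0.35, (2, 256, 256))
green_post, nir_post = rng.uniform(0.03, 0.35, (2, 256, 256))

# Density slicing: pixels above an NDWI threshold are mapped as water.
threshold = 0.0                      # assumed cut-off; tuned per scene in practice
water_pre = ndwi(green_pre, nir_pre) > threshold
water_post = ndwi(green_post, nir_post) > threshold

flooded = water_post & ~water_pre    # newly inundated pixels
print("Flooded pixels:", int(flooded.sum()))
```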
(This article belongs to the Special Issue Remote Sensing of Floods: Progress, Challenges and Opportunities)
Figure 1. The location map of the study site (referenced to Hamilton County, TN; Catoosa, and Walker County, GA) on an ESRI base map available on ArcGIS Pro software (version 3.1.3). The red color on the lower left side of the image indicates the study area and the right image presents the boundary of South Chickamauga Creek for this study.
Figure 2. Schematic workflow of flood mapping using PlanetScope imagery.
Figure 3. (a) Landsat 8 OLI and (b) PlanetScope satellite images of pre-flood conditions of the study site and a detailed pixel image of the creek width of SCC. Near-infrared bands were used to make maps for both Landsat and PlanetScope imagery.
Figure 4. PlanetScope satellite images of pre- and post-flood conditions of the study site, respectively. (a,b): True color image display; (c,d): False color composite display (Green, Red, and NIR bands).
Figure 5. The red color indicates the random pixels for accuracy assessment of (a) pre-flood and (b) post-flood conditions. The background images are shown in true color.
Figure 6. NDWI classified thematic maps of (a) pre- and (b) post-flood conditions of the study site.
Figure 7. Unsupervised classified thematic maps of (a) pre- and (b) post-flood conditions of the study site.
Figure 8. Major flood-affected areas of SCC using (i,iii) density slicing of the NDWI image and (ii,iv) unsupervised classification.
Figure 9. Total water-covered areas of the study site in pre-flood and post-flood conditions.
24 pages, 6941 KiB  
Article
Discriminating Seagrasses from Green Macroalgae in European Intertidal Areas Using High-Resolution Multispectral Drone Imagery
by Simon Oiry, Bede Ffinian Rowe Davies, Ana I. Sousa, Philippe Rosa, Maria Laura Zoffoli, Guillaume Brunier, Pierre Gernez and Laurent Barillé
Remote Sens. 2024, 16(23), 4383; https://doi.org/10.3390/rs16234383 - 23 Nov 2024
Viewed by 1337
Abstract
Coastal areas support seagrass meadows, which offer crucial ecosystem services, including erosion control and carbon sequestration. However, these areas are increasingly impacted by human activities, leading to habitat fragmentation and seagrass decline. In situ surveys, traditionally performed to monitor these ecosystems, face limitations on temporal and spatial coverage, particularly in intertidal zones, prompting the addition of satellite data within monitoring programs. Yet, satellite remote sensing can be limited by too coarse spatial and/or spectral resolutions, making it difficult to discriminate seagrass from other macrophytes in highly heterogeneous meadows. Drone (unmanned aerial vehicle—UAV) images at a very high spatial resolution offer a promising solution to address challenges related to spatial heterogeneity and the intrapixel mixture. This study focuses on using drone acquisitions with a ten spectral band sensor similar to that onboard Sentinel-2 for mapping intertidal macrophytes at low tide (i.e., during a period of emersion) and effectively discriminating between seagrass and green macroalgae. Nine drone flights were conducted at two different altitudes (12 m and 120 m) across heterogeneous intertidal European habitats in France and Portugal, providing multispectral reflectance observation at very high spatial resolution (8 mm and 80 mm, respectively). Taking advantage of their extremely high spatial resolution, the low altitude flights were used to train a Neural Network classifier to discriminate five taxonomic classes of intertidal vegetation: Magnoliopsida (Seagrass), Chlorophyceae (Green macroalgae), Phaeophyceae (Brown algae), Rhodophyceae (Red macroalgae), and benthic Bacillariophyceae (Benthic diatoms), and validated using concomitant field measurements. Classification of drone imagery resulted in an overall accuracy of 94% across all sites and images, covering a total area of 467,000 m2. The model exhibited an accuracy of 96.4% in identifying seagrass. In particular, seagrass and green algae can be discriminated. The very high spatial resolution of the drone data made it possible to assess the influence of spatial resolution on the classification outputs, showing a limited loss in seagrass detection up to about 10 m. Altogether, our findings suggest that the MultiSpectral Instrument (MSI) onboard Sentinel-2 offers a relevant trade-off between its spatial and spectral resolution, thus offering promising perspectives for satellite remote sensing of intertidal biodiversity over larger scales. Full article
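The paper's Neural Network classifier is not specified in this listing; as a generic stand-in, the sketch below trains a small scikit-learn MLP on synthetic 10-band per-pixel spectra for five vegetation classes, illustrating the per-pixel classification idea only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Toy per-pixel samples: 10 spectral bands per pixel, 5 vegetation classes
# (0 seagrass, 1 green algae, 2 brown algae, 3 red algae, 4 benthic diatoms).
rng = np.random.default_rng(5)
X = rng.random((2000, 10))
y = rng.integers(0, 5, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("Overall accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```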
(This article belongs to the Section Ecological Remote Sensing)
Figure 1. Location of drone flights in France and Portugal. (A) Gulf of Morbihan (two sites), (B) Bourgneuf Bay (two sites), and (C) Ria de Aveiro Coastal Lagoon (three sites). The golden areas represent the intertidal zone.
Figure 2. The five taxonomic classes of vegetation used to train the Neural Network model and an example of their raw spectral signatures at the spectral resolution of the Micasense RedEdge Dual MX. (A): Magnoliopsida (Zostera noltei); (B): Phaeophyceae (Fucus sp.); (C): Rhodophyceae (Gracilaria vermiculophylla); (D): Chlorophyceae (Ulva sp.); (E): Bacillariophyceae (benthic diatoms). (F): Spectral signature of each vegetation class. Classes and species taxonomy following the WORMS—World Register of Marine Species classification.
Figure 3. Schematic representation of the workflow. Parallelograms represent input or output data, and rectangles represent Python processing algorithms. The overall workflow of this study is divided into two distinct parts based on the spatial resolution of the drone flights: high-resolution flights (pixel size: 8 mm) were utilized for training and prediction of the Neural Network model, whereas lower-resolution flights (pixel size: 80 mm) were solely employed for prediction and validation purposes. Validation has been performed on both high- and low-resolution flights.
Figure 4. Comparison of reflectance retrieved from both low-altitude and high-altitude flights over a common area. The black dashed line represents a 1 to 1 relationship. The left (A) plots raw data, and the right (B) plots standardized data (Equation (1)).
Figure 5. RGB ortho-mosaic (left) and prediction (right) of the low-altitude flight of Gafanha, Portugal. The total extent of this flight was 3000 m² with a resolution of 8 mm per pixel. The zoom covers an area equivalent to a 10 m Sentinel-2 pixel size.
Figure 6. RGB ortho-mosaic (left) and prediction (right) of the high-altitude flight of Gafanha, Portugal. The total extent of this flight was about 1 km² with a resolution of 80 mm per pixel. The yellow outline shows the extent of Gafanha’s low-altitude flight, as presented in Figure 5. The zoom covers an area equivalent to a 10 m Sentinel-2 pixel size.
Figure 7. RGB ortho-mosaic (top) and prediction (bottom) of the flight made in the inner part of the Ria de Aveiro coastal lagoon, Portugal. The total extent of this flight was about 1.5 km² with a resolution of 80 mm per pixel. The zoom inserts cover an area equivalent to the size of a 10 m Sentinel-2 pixel.
Figure 8. RGB ortho-mosaic (top) and prediction (bottom) of L’Epine, France. The total extent of this flight was about 28,000 m² with a resolution of 80 mm per pixel. The zoom covers an area equivalent to a 10 m Sentinel-2 pixel size.
Figure 9. A global confusion matrix on the left is derived from validation data across each flight, while a mosaic of confusion matrices from individual flights is presented on the right. The labels inside the matrices indicate the balanced accuracy for each class. The labels at the bottom of the global matrix indicate the User’s accuracy for each class, and those on the right indicate the Producer’s accuracy. The values adjacent to the names of each site represent the proportion of total pixels from that site contributing to the overall matrix. Grey lines within the mosaic indicate the absence of validation data for the class at that site. The table at the bottom summarizes the Sensitivity, Specificity, and Accuracy for each class and for the overall model.
Figure 10. Variable importance of the Neural Network classifier for each taxonomic class. The longer the slice, the more important the variable for prediction of each class. The right plot shows the drone raw and standardized reflectance spectra of each class. Each slice represents the Variable Importance (VI) of both raw and standardized reflectance combined.
Figure 11. Predicted area loss for different vegetation types (green algae, seagrass, brown algae, and red algae) as a function of spatial resolution. The lines represent Generalized Linear Model (GLM) predictions, and shaded areas indicate standard errors. As the resolution decreases, predicted area loss increases for all vegetation types, with green algae showing the highest loss and seagrass the smallest at coarser resolutions.
Figure 12. Kernel density plot showing the proportion of pixels well classified based on the percent cover of the class in high-altitude flight pixels of Gafanha, Portugal. Each subplot shows all the pixels of the same class on the high-altitude flight. The cover (%) of classes was retrieved using the result of the classification of the low-altitude flight in Gafanha, Portugal.
Figure 13. Photosynthetic and carotenoid pigments present (green) or absent (red) in each taxonomic class present in the Neural Network classifier, along with their absorption wavelengths measured with a spectroradiometer: Chl-b—chlorophyll-b, Chl-c—chlorophyll-c, Fuco—fucoxanthin, Zea—zeaxanthin, Diad—diadinoxanthin, Lut—lutein, Neo—neoxanthin, PE—phycoerythrin, PC—phycocyanin [25,26,54,55,56].
Figure 14. Sample of Figure 9 focusing on green macrophytes. The labels inside the matrix indicate the number of pixels.
18 pages, 16650 KiB  
Article
Mapping Seagrass Distribution and Abundance: Comparing Areal Cover and Biomass Estimates Between Space-Based and Airborne Imagery
by Victoria J. Hill, Richard C. Zimmerman, Dorothy A. Byron and Kenneth L. Heck
Remote Sens. 2024, 16(23), 4351; https://doi.org/10.3390/rs16234351 - 21 Nov 2024
Viewed by 980
Abstract
This study evaluated the effectiveness of Planet satellite imagery in mapping seagrass coverage in Santa Rosa Sound, Florida. We compared very-high-resolution aerial imagery (0.3 m) collected in September 2022 with high-resolution Planet imagery (~3 m) captured during the same period. Using supervised classification techniques, we accurately identified expansive, continuous seagrass meadows in the satellite images, successfully classifying 95.5% of the 11.18 km2 of seagrass area delineated manually from the aerial imagery. Our analysis utilized an occurrence frequency (OF) product, which was generated by processing ten clear-sky images collected between 8 and 25 September 2022 to determine the frequency with which each pixel was classified as seagrass. Seagrass patches encompassing at least nine pixels (~200 m2) were almost always detected by our classification algorithm. Using an OF threshold equal to or greater than >60% provided a high level of confidence in seagrass presence while effectively reducing the impact of small misclassifications, often of individual pixels, that appeared sporadically in individual images. The image-to-image uncertainty in seagrass retrieval from the satellite images was 0.1 km2 or 2.3%, reflecting the robustness of our classification method and allowing confidence in the accuracy of the seagrass area estimate. The satellite-retrieved leaf area index (LAI) was consistent with previous in situ measurements, leading to the estimate that 2700 tons of carbon per year are produced by the Santa Rosa Sound seagrass ecosystem, equivalent to a drawdown of approximately 10,070 tons of CO2. This satellite-based approach offers a cost-effective, semi-automated, and scalable method of assessing the distribution and abundance of submerged aquatic vegetation that provides numerous ecosystem services. Full article
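A minimal sketch of building an occurrence-frequency (OF) layer from a stack of per-date seagrass classifications and applying the ≥60% threshold; the classifications are synthetic and the ~3 m pixel size is an assumption taken from the abstract.

```python
import numpy as np

# Stack of per-date binary seagrass classifications (n_dates, rows, cols);
# here 10 synthetic dates stand in for the ten clear-sky images in the study.
rng = np.random.default_rng(11)
stack = rng.random((10, 300, 300)) < 0.5          # True = pixel classified as seagrass

occurrence_freq = stack.mean(axis=0) * 100.0      # % of dates the pixel was seagrass
seagrass = occurrence_freq >= 60.0                # OF >= 60% threshold from the paper

pixel_area_m2 = 3.0 * 3.0                         # ~3 m Planet pixels (assumed)
print(f"Seagrass area: {seagrass.sum() * pixel_area_m2 / 1e6:.2f} km2")
```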
Figure 1. (A) The location of Pensacola Bay is indicated by the red box. (B) The location of Santa Rosa Sound is indicated by the red outline. Underlying Ocean basemap from Esri ArcGIS Pro 3.3.2. Sources: Esri, GEBCO, NOAA, National Geographic, DeLorme, HERE, Geonames.org, and other contributors.
Figure 2. Flowchart outlining the processing steps for satellite and aerial imagery.
Figure 3. Aerial imagery with areas identified as containing seagrass overlaid as polygons (red). The white-dashed box is the location of image overlap used in uncertainty estimates; solid white boxes numbered 1 through 5 are the locations of examples shown in later figures. Green dots highlight the locations of East Sabine and Big Sabine Point, mentioned later in the text.
Figure 4. Seagrass area polygons derived from aerial imagery (black lines), and Planet-identified seagrass using OF thresholds of ≥60% and ≥90% overlaid on aerial imagery. (A) Subset of aerial imagery highlighted as Box 3 in Figure 3; (B) subset of aerial imagery highlighted as Box 5 in Figure 3.
Figure 5. Proportion of false-negative area by polygon size (with OF ≥ 60%) for aerial polygons where zero Planet pixels were identified as seagrass.
Figure 6. Previous (black) seagrass areal extent for Santa Rosa Sound based on historic data [32,33] and the 2022 estimate (red) derived from this analysis. Historical area generated by setting the average patchy density at 50% and summing continuous + patchy × 0.5 areas provided in the literature.
Figure 7. (A) RGB representation of a subset of aerial imagery showing the large continuous seagrass meadow at Big Sabine Point in the middle of Santa Rosa Sound (see Figure 3, box 1). (B) Seagrass OF derived from all satellite images overlaid on the aerial imagery from panel (A). (C) Mean leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (A). (D) Polygons based on aerial imagery (dashed white lines) and polygons generated for OF ≥ 60% (dashed red lines) overlaid on Planet imagery.
Figure 8. (A) RGB representation of a subset of aerial imagery showing a seagrass meadow along the north shore of Santa Rosa Sound. The location of this site is shown in Figure 3, box 2. (B) Planet-derived OF overlaid on the aerial imagery from panel (A). (C) Leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (A). (D) Polygons based on aerial imagery (dashed white lines) and polygons generated for OF ≥ 60% (dashed red lines) overlaid on Planet imagery.
Figure 9. (A) Subset of RGB aerial images just west of Navarre Bridge; the location of this site is shown in Figure 3, box 3. Seagrass meadows along the shore and in the middle of Santa Rosa Sound were obscured by suspended sediment plumes in the aerial image. (B) Planet-derived OF overlaid on the aerial imagery from panel (A). (C) Leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (A). (D) Polygons based on aerial imagery (dashed white lines) and polygons generated from OF ≥ 60% (dashed red lines) overlaid on Planet imagery.
Figure 10. (A) Subset of RGB aerial images showing an example of seagrass distribution over a shallow sand bank along the southern shore of Santa Rosa Sound. The location of this site is shown in Figure 3, box 4; the white arrow points to shallow sand with small seagrass patches. (B) Planet-derived OF overlaid on the aerial imagery from panel (A). (C) Leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (A). (D) Polygons based on aerial imagery (dashed white lines) and polygons generated for OF ≥ 60% (dashed red lines) overlaid on Planet imagery.
23 pages, 7255 KiB  
Article
Exploring the Relationship Between Very-High-Resolution Satellite Imagery Data and Fruit Count for Predicting Mango Yield at Multiple Scales
by Benjamin Adjah Torgbor, Priyakant Sinha, Muhammad Moshiur Rahman, Andrew Robson, James Brinkhoff and Luz Angelica Suarez
Remote Sens. 2024, 16(22), 4170; https://doi.org/10.3390/rs16224170 - 8 Nov 2024
Viewed by 1216
Abstract
Tree- and block-level prediction of mango yield is important for farm operations, but current manual methods are inefficient. Previous research has identified the accuracies of mango yield forecasting using very-high-resolution (VHR) satellite imagery and an ’18-tree’ stratified sampling method. However, this approach still requires infield sampling to calibrate canopy reflectance and the derived block-level algorithms are unable to translate to other orchards due to the influences of abiotic and biotic conditions. To better appreciate these influences, individual tree yields and corresponding canopy reflectance properties were collected from 2015 to 2021 for 1958 individual mango trees from 55 orchard blocks across 14 farms located in three mango growing regions of Australia. A linear regression analysis of the block-level data revealed the non-existence of a universal relationship between the 24 vegetation indices (VIs) derived from VHR satellite data and fruit count per tree, an outcome likely due to the influence of location, season, management and cultivar. The tree-level fruit count predicted using a random forest (RF) model trained on all calibration data produced a percentage root mean squared error (PRMSE) of 26.5% and a mean absolute error (MAE) of 48 fruits/tree. The lowest PRMSEs produced from RF-based models developed from location, season and cultivar subsets at the individual tree level ranged from 19.3% to 32.6%. At the block level, the PRMSE for the combined model was 10.1% and the lowest values for the location, seasonal and cultivar subset models varied between 7.2% and 10.0% upon validation. Generally, the block-level predictions outperformed the individual tree-level models. Maps were produced to provide mango growers with a visual representation of yield variability across orchards. This enables better identification and management of the influence of abiotic and biotic constraints on production. Future research could investigate the causes of spatial yield variability in mango orchards. Full article
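A hedged sketch of the tree-level regression step: a random forest trained on 24 vegetation-index features with PRMSE and MAE computed on a held-out 20% split. The data below are synthetic and the paper's feature engineering is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Toy dataset: 24 vegetation-index features per tree and a fruit count target.
rng = np.random.default_rng(9)
X = rng.random((1958, 24))
y = 150 + 200 * X[:, 0] + rng.normal(0, 30, 1958)    # synthetic fruit counts

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
prmse = 100 * rmse / y_te.mean()                     # percentage RMSE
mae = mean_absolute_error(y_te, pred)
print(f"PRMSE = {prmse:.1f}%, MAE = {mae:.0f} fruits/tree")
```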
Figure 1. Location of mango farms in the three mango growing regions of Australia.
Figure 2. Flowchart showing the sequence of procedure steps used in this study to generate the results.
Figure 3. Example of 18 tree locations on the classified NDVI map (a) and on the ESRI basemap image (b). The points with L, M and H prefixes represent the different tree vigour classes of low, medium and high, respectively.
Figure 4. Summary of fruits counted (a) per farm and (b) heterogeneity of cultivar yield distribution from 2015 to 2021. The numerical values and black dots associated with each boxplot represent the number of trees of that particular cultivar and outliers, respectively.
Figure 5. Correlation between fruit count and the 24 VIs using the entire dataset of 1958 datapoints. The green and red colour ramps show the strength and direction of the correlation being positive and negative, respectively.
Figure 6. Distribution of slopes for CIRE_1 with average slope and standard deviation.
Figure 7. Relationships identified between RENDVI and fruit count: (a) and (b) were positive for 2016 and 2017, (c) negative for 2020 and (d) non-existent for 2021.
Figure 8. RF prediction of fruit count using all individual tree datasets (combined model). The different coloured points represent the sampled trees from the respective farms and regions. n = 390 represents the number of datapoints (20%) used for model validation.
Figure 9. RF-based location (region) prediction of fruit count in the (a) Northern Territory (NT), (b) Northern Queensland (N–QLD) and (c) South East Queensland (SE–QLD). The different coloured points represent the sampled trees on a given farm in the respective regions.
Figure 10. RF-based variable importance plots for models from (a) combined datasets, (b) Northern Territory (NT), (c) Northern Queensland (N–QLD) and (d) South East Queensland (SE–QLD) and the best (e) seasonal and (f) cultivar models.
Figure 11. Comparison of total actual and predicted yield for the 51 validation points (blocks per season) obtained from 29 unique blocks with available actual harvest data from 2016 to 2021.
Figure 12. An example of a tree-level yield variability map derived from the RF-based combined model (right). The RGB image of the mango orchard mapped is shown on the left. The legend presents an industry-based categorization of yield variability ranging from low (0–55) to high (139–170) for this study.
20 pages, 10555 KiB  
Article
Cloud Detection Using a UNet3+ Model with a Hybrid Swin Transformer and EfficientNet (UNet3+STE) for Very-High-Resolution Satellite Imagery
by Jaewan Choi, Doochun Seo, Jinha Jung, Youkyung Han, Jaehong Oh and Changno Lee
Remote Sens. 2024, 16(20), 3880; https://doi.org/10.3390/rs16203880 - 18 Oct 2024
Viewed by 1287
Abstract
It is necessary to extract and recognize the cloud regions presented in imagery to generate satellite imagery as analysis-ready data (ARD). In this manuscript, we proposed a new deep learning model to detect cloud areas in very-high-resolution (VHR) satellite imagery by fusing two deep learning architectures. The proposed UNet3+ model with a hybrid Swin Transformer and EfficientNet (UNet3+STE) was based on the structure of UNet3+, with the encoder sequentially combining EfficientNet based on mobile inverted bottleneck convolution (MBConv) and the Swin Transformer. By sequentially utilizing convolutional neural networks (CNNs) and transformer layers, the proposed algorithm aimed to extract the local and global information of cloud regions effectively. In addition, the decoder used MBConv to restore the spatial information of the feature map extracted by the encoder and adopted the deep supervision strategy of UNet3+ to enhance the model’s performance. The proposed model was trained using the open dataset derived from KOMPSAT-3 and 3A satellite imagery and conducted a comparative evaluation with the state-of-the-art (SOTA) methods on fourteen test datasets at the product level. The experimental results confirmed that the proposed UNet3+STE model outperformed the SOTA methods and demonstrated the most stable precision, recall, and F1 score values with fewer parameters and lower complexity. Full article
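The exact UNet3+STE implementation is not given in this listing; as an illustration of the MBConv building block it describes, a generic PyTorch mobile inverted bottleneck (1×1 expansion, depthwise convolution, squeeze-and-excitation, 1×1 projection) might look like the sketch below.

```python
import torch
import torch.nn as nn

class MBConv(nn.Module):
    """Minimal mobile inverted bottleneck block, as used in EfficientNet-style encoders."""
    def __init__(self, in_ch, out_ch, expand=4, stride=1, se_ratio=0.25):
        super().__init__()
        mid = in_ch * expand
        self.use_residual = stride == 1 and in_ch == out_ch
        self.expand = nn.Sequential(                      # 1x1 expansion
            nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.SiLU())
        self.depthwise = nn.Sequential(                   # 3x3 depthwise conv
            nn.Conv2d(mid, mid, 3, stride, 1, groups=mid, bias=False),
            nn.BatchNorm2d(mid), nn.SiLU())
        se_ch = max(1, int(in_ch * se_ratio))
        self.se = nn.Sequential(                          # squeeze-and-excitation
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(mid, se_ch, 1), nn.SiLU(),
            nn.Conv2d(se_ch, mid, 1), nn.Sigmoid())
        self.project = nn.Sequential(                     # 1x1 projection
            nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch))

    def forward(self, x):
        h = self.depthwise(self.expand(x))
        h = h * self.se(h)
        h = self.project(h)
        return x + h if self.use_residual else h

x = torch.randn(1, 4, 64, 64)          # e.g. a 4-band VHR image patch
print(MBConv(4, 4)(x).shape)           # torch.Size([1, 4, 64, 64])
```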
(This article belongs to the Special Issue Deep Learning Techniques Applied in Remote Sensing)
Figure 1. Examples of images contained in the training dataset: satellite images (top) and labeled reference data (bottom) (black: clear skies; red: thick and thin clouds; green: cloud shadows).
Figure 2. Test datasets for evaluating the performance of deep learning models (black: clear skies; red: thick clouds; green: thin clouds; yellow: cloud shadows).
Figure 3. Architecture of UNet3+.
Figure 4. Architecture of the proposed UNet3+STE model (where E = [E1, E2, E3, E4, E5] contains the feature map of each encoder stage and D = [D1, D2, D3, D4] includes the feature map of each decoder stage).
Figure 5. Structure of the encoder part.
Figure 6. Structures of the MBConvs in UNet3+STE.
Figure 7. Structure of the Swin Transformer layer.
Figure 8. Examples of structures for calculating D2 in the decoder part.
Figure 9. Deep supervision structures in the decoder part.
Figure 10. Precision, recall, and F1 scores for each class.
Figure 11. Cloud detection results produced for high-spatial-resolution (5965 × 6317) images at the product level (black: clear skies; red: thick and thin clouds; green: cloud shadows).
Figure 12. First-subset images (2000 × 2000) of the cloud detection results produced for high-spatial-resolution (5965 × 5720) images at the product level.
Figure 13. Second-subset images (2000 × 2000) of the cloud detection results produced for high-spatial-resolution (5965 × 5073) images at the product level.
29 pages, 6780 KiB  
Article
Phenological and Biophysical Mediterranean Orchard Assessment Using Ground-Based Methods and Sentinel 2 Data
by Pierre Rouault, Dominique Courault, Guillaume Pouget, Fabrice Flamain, Papa-Khaly Diop, Véronique Desfonds, Claude Doussan, André Chanzy, Marta Debolini, Matthew McCabe and Raul Lopez-Lozano
Remote Sens. 2024, 16(18), 3393; https://doi.org/10.3390/rs16183393 - 12 Sep 2024
Cited by 2 | Viewed by 1479
Abstract
A range of remote sensing platforms provide high spatial and temporal resolution insights which are useful for monitoring vegetation growth. Very few studies have focused on fruit orchards, largely due to the inherent complexity of their structure. Fruit trees are mixed with inter-rows [...] Read more.
A range of remote sensing platforms provide high spatial and temporal resolution insights which are useful for monitoring vegetation growth. Very few studies have focused on fruit orchards, largely due to the inherent complexity of their structure. Fruit trees are mixed with inter-rows that can be grassed or non-grassed, and there are no standard protocols for ground measurements suitable for the range of crops. The assessment of biophysical variables (BVs) for fruit orchards from optical satellites remains a significant challenge. The objectives of this study are as follows: (1) to address the challenges of extracting and better interpreting biophysical variables from optical data by proposing new ground measurements protocols tailored to various orchards with differing inter-row management practices, (2) to quantify the impact of the inter-row at the Sentinel pixel scale, and (3) to evaluate the potential of Sentinel 2 data on BVs for orchard development monitoring and the detection of key phenological stages, such as the flowering and fruit set stages. Several orchards in two pedo-climatic zones in southeast France were monitored for three years: four apricot and nectarine orchards under different management systems and nine cherry orchards with differing tree densities and inter-row surfaces. We provide the first comparison of three established ground-based methods of assessing BVs in orchards: (1) hemispherical photographs, (2) a ceptometer, and (3) the Viticanopy smartphone app. The major phenological stages, from budburst to fruit growth, were also determined by in situ annotations on the same fields monitored using Viticanopy. In parallel, Sentinel 2 images from the two study sites were processed using a Biophysical Variable Neural Network (BVNET) model to extract the main BVs, including the leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), and fraction of green vegetation cover (FCOVER). The temporal dynamics of the normalised FAPAR were analysed, enabling the detection of the fruit set stage. A new aggregative model was applied to data from hemispherical photographs taken under trees and within inter-rows, enabling us to quantify the impact of the inter-row at the Sentinel 2 pixel scale. The resulting value compared to BVs computed from Sentinel 2 gave statistically significant correlations (0.57 for FCOVER and 0.45 for FAPAR, with respective RMSE values of 0.12 and 0.11). Viticanopy appears promising for assessing the PAI (plant area index) and FCOVER for orchards with grassed inter-rows, showing significant correlations with the Sentinel 2 LAI (R2 of 0.72, RMSE 0.41) and FCOVER (R2 0.66 and RMSE 0.08). Overall, our results suggest that Sentinel 2 imagery can support orchard monitoring via indicators of development and inter-row management, offering data that are useful to quantify production and enhance resource management. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
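The figure captions below refer to an aggregative model (Equation (1)) that combines tree and inter-row contributions at the Sentinel 2 pixel scale. A minimal sketch of one plausible reading of that aggregation, inferred from the caption of Figure 7 rather than taken from the article itself, is:

```python
# Hedged sketch: pixel-scale FCOVER read as the tree component plus the grass
# visible in the gaps between crowns, with the tree / inter-row shares as given
# in the Figure 7 caption. This is an inference, not the authors' exact code.
def aggregate_fcover(fcover_tree: float, fcover_grass: float) -> dict:
    """fcover_tree: fraction of ground covered by tree crowns (0-1);
    fcover_grass: green cover fraction of the inter-row surface (0-1)."""
    fcover_pixel = fcover_tree + (1.0 - fcover_tree) * fcover_grass
    return {
        "fcover_pixel": fcover_pixel,
        "tree_share_pct": 100.0 * fcover_tree / fcover_pixel,
        "inter_row_share_pct": 100.0 * (1.0 - fcover_tree) * fcover_grass / fcover_pixel,
    }

# e.g. 35% crown cover over a grassed inter-row that is 60% green
print(aggregate_fcover(0.35, 0.60))
# {'fcover_pixel': 0.74, 'tree_share_pct': 47.3..., 'inter_row_share_pct': 52.7...}
```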
Show Figures

Figure 1. Schematic of the three approaches used to monitor orchard development at different spatial scales throughout the year (from tree level for phenological observations to watershed level using Sentinel 2 data).
Figure 2. (a) Locations of the monitored orchards in the Ouvèze–Ventoux watershed (green points at right) and in the La Crau area (yellow points at left), (b) pictures of 2 cherry orchards (13 September and 22 July 2022): top, non-grassed orchard drip-irrigated by two rows of drippers and bottom, grassed orchard drip-irrigated in summer, (c) pictures of 2 orchards in La Crau (top, nectarine tree in spring 22 March 2023 and bottom, in summer 26 June 2022).
Figure 3. (a) Main steps in processing the hemispherical photographs. (b) The three methods of data acquisition around the central tree. (c) Protocol used with hemispherical photographs. (d) Protocol used with the Viticanopy application, with 3 trees monitored in the four directions (blue arrows). (e) Protocols used with the ceptometer: P1 measured in the shadow of the trees (blue) and P2 in the inter-rows (black).
Figure 4. Protocol for the monitoring of the phenological stages of cherry trees. (a) Phenology of cherry trees according to BBCH; (b) at plot scale, in an orchard, three trees in red monitored by observations (BBCH scale); (c) at tree scale, two locations are selected to classify flowering stage in the tree; and (d) flowering stage of a cherry tree in April 2022.
Figure 5. Comparison of temporal profiles of the interpolated Sentinel 2 LAI (black line) and the PAI obtained from the ceptometer (blue line, P2 protocol) and Viticanopy (green line) for three orchards: (a) 3099 (cherry—grassed—Ouvèze), (b) 183 (cherry—non-grassed—Ouvèze), and (c) 4 (nectarine—La Crau) at the beginning of 2023.
Figure 6. Comparison between Sentinel 2 LAI and PAI from (a) ceptometer measurements taken at all orchards of the two areas (La Crau and Ouvèze), (b) Viticanopy measurements at all orchards, and (c) Viticanopy measurements excluding 2 non-grassed orchards (183, 259). The black line represents the optimal correlation 1:1; the red line represents the results from linear regression.
Figure 7. (a) (top graphs) Proportion of tree (orange, 100*FCOVERt/FCOVERc, see Equation (1)) and of inter-row (green, 100*((1-FCOVERt)*FCOVERg)/FCOVERc) components computed from hemispherical photographs used to estimate FCOVER for two dates, 22 March 2022 (doy 81) and 21 June 2022 (doy 172), for all the monitored fields. (b) (bottom graphs) For two plots, left, field 183.2 and right, field 3099.1, temporal variations in the proportion of tree and inter-row components for the different observation dates in 2022.
Figure 8. (a) Averaged percentage of grass contribution to FAPAR computed from hemispherical photographs according to Equation (1) for all grassed orchard plots in 2022. Examples of Sentinel 2 FAPAR dynamics (black lines) for plots at (b) non-grassed site 183 and (c) grassed site 1418. Initial values of FAPAR, as computed from BVNET, are provided in black. The green line represents adjusted FAPAR after subtracting the grass contribution (percentage obtained from hemispherical photographs); it corresponds to FAPAR for the trees only. The percentage of grass contribution is in red.
Figure 9. Correlation between (a) FCOVER obtained from hemispherical photographs (from Equation (1)) for all orchards of the two studied areas and FCOVER from Sentinel 2 computed with BVNET, (b) FAPAR from hemispherical photographs and FAPAR from Sentinel 2 for all orchards and for the 3 years. (c) Correlation between FCOVER from Viticanopy and Sentinel 2 for all orchards of the two areas, except 183 and 259. (d) Correlation between FCOVER from upward-aimed hemispherical photographs and from Viticanopy for all plots.
Figure 10. (a) LAI temporal profiles obtained from BVNET applied to Sentinel 2 data averaged at plot and field scales (field 3099) for the year 2022 and (b) soil water stock (in mm, in blue) computed at 0–50 cm using capacitive sensors (described in Section 2.1), with rainfall recorded at the Carpentras station (see Supplementary Part S1 and Table S1).
Figure 11. Time series of FCOVER (mean value at field scale) for the cherry trees in field 3099 in the Ouvèze area from 2016 to 2023.
Figure 12. Sentinel 2 FAPAR evolution in 2022 for two cherry tree fields, with the date of flowering observation (in green) and the date of fruit set observation (in red) for (a) plot 183 (non-grassed cherry trees) and (b) plot 3099 (grassed cherry trees).
Figure 13. Variability in dates for the phenological stages of a cherry tree orchard (plot 3099) observed in 2022.
Figure 14. (a) Normalised FAPAR computed for all observed cherry trees relative to observation dates for BBCH stages in the Ouvèze area in 2021 for five plots. (b) Map of dates distinguishing between flowering and fruit set stages for 2021 obtained by thresholding FAPAR images.
23 pages, 10725 KiB  
Article
Leveraging Geospatial Information to Map Perceived Tenure Insecurity in Urban Deprivation Areas
by Esaie Dufitimana, Jiong Wang and Divyani Kohli-Poll Jonker
Land 2024, 13(9), 1429; https://doi.org/10.3390/land13091429 - 4 Sep 2024
Viewed by 1073
Abstract
Increasing tenure security is essential for promoting safe and inclusive urban development and achieving Sustainable Development Goals. However, assessment of tenure security relies on conventional census and survey statistics, which often fail to capture the dimension of perceived tenure insecurity. This perceived tenure [...] Read more.
Increasing tenure security is essential for promoting safe and inclusive urban development and achieving Sustainable Development Goals. However, assessment of tenure security relies on conventional census and survey statistics, which often fail to capture the dimension of perceived tenure insecurity. This perceived tenure insecurity is crucial as it influences local engagement and the effectiveness of policies. In many regions, particularly in the Global South, these conventional methods lack the necessary data to adequately measure perceived tenure insecurity. This study first used household survey data to derive variations in perceived tenure insecurity and then explored the potential of Very-High Resolution (VHR) satellite imagery and spatial data to assess these variations in urban deprived areas. Focusing on the city of Kigali, Rwanda, the study collected household survey data, which were analysed using Multiple Correspondence Analysis to capture variations of perceived tenure insecurity. In addition, VHR satellite imagery and spatial datasets were analysed to characterize urban deprivation. Finally, a Random Forest regression model was used to assess the relationship between variations of perceived tenure insecurity and the spatial characteristics of urban deprived areas. The findings highlight the potential of geospatial information to estimate variations in perceived tenure insecurity within urban deprived contexts. These insights can inform evidence-based decision-making by municipalities and stakeholders in urban development initiatives. Full article
(This article belongs to the Special Issue Digital Earth and Remote Sensing for Land Management)
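As a rough illustration of the mapping step described in the abstract, the sketch below computes GLCM texture features for image patches around survey points and fits a Random Forest regressor against an MCA-derived perceived-insecurity score. It is a hedged sketch under assumed parameters (patch size, GLCM distances and angles, feature set); the study's actual feature extraction at 20 m and 25 m buffers and its full variable set are not reproduced here.

```python
# Hypothetical pipeline sketch: GLCM texture descriptors per VHR patch, then a
# Random Forest regression whose target is the MCA-derived insecurity score.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestRegressor

PROPS = ["contrast", "homogeneity", "energy", "correlation", "dissimilarity"]

def glcm_features(patch_8bit: np.ndarray) -> np.ndarray:
    """Texture descriptors for one grey-level image patch (uint8)."""
    glcm = graycomatrix(patch_8bit, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean() for p in PROPS])

rng = np.random.default_rng(0)
# Stand-ins for VHR patches around survey points and their MCA scores
patches = [rng.integers(0, 256, (41, 41), dtype=np.uint8) for _ in range(200)]
mca_scores = rng.normal(size=200)

X = np.stack([glcm_features(p) for p in patches])
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, mca_scores)
print(dict(zip(PROPS, model.feature_importances_.round(3))))
```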
Show Figures

Figure 1. Map of Kigali city and the selected sites.
Figure 2. Steps and process followed by the study.
Figure 3. Characteristics of the physical environment of neighborhoods across the study sites.
Figure 4. Responses according to housing materials, building shapes, sizes, and access to basic amenities.
Figure 5. Tenure rights based on land and/or property documentation, acquisition methods, and duration of occupation.
Figure 6. Perceptions of respondents on tenure (in)security.
Figure 7. Scatter plot of respondents in 2-dimensional space on the first and second dimensions of MCA.
Figure 8. Squared correlation indicators with the first dimension of MCA.
Figure 9. The variation of perceived tenure insecurity across the study sites: (a) illustrates the site of Gatsata (3); (b) illustrates the sites of Kimisagara (2) and Gitega (1).
Figure 10. Example of land cover classification results from the model (left) and GLCM texture features (right).
Figure 11. Variable importance based on image-based spatial characteristics extracted at the buffer of 20 m.
Figure 12. Variable importance based on image-based spatial characteristics and additional spatial data at the buffer of 25 m.
17 pages, 12277 KiB  
Article
Is Your Training Data Really Ground Truth? A Quality Assessment of Manual Annotation for Individual Tree Crown Delineation
by Janik Steier, Mona Goebel and Dorota Iwaszczuk
Remote Sens. 2024, 16(15), 2786; https://doi.org/10.3390/rs16152786 - 30 Jul 2024
Cited by 5 | Viewed by 1261
Abstract
For the accurate and automatic mapping of forest stands based on very-high-resolution satellite imagery and digital orthophotos, precise object detection at the individual tree level is necessary. Currently, supervised deep learning models are primarily applied for this task. To train a reliable model, [...] Read more.
For the accurate and automatic mapping of forest stands based on very-high-resolution satellite imagery and digital orthophotos, precise object detection at the individual tree level is necessary. Currently, supervised deep learning models are primarily applied for this task. To train a reliable model, it is crucial to have an accurate tree crown annotation dataset. The current method of generating these training datasets still relies on manual annotation and labeling. Because of the intricate contours of tree crowns, vegetation density in natural forests and the insufficient ground sampling distance of the imagery, manually generated annotations are error-prone. It is unlikely that the manually delineated tree crowns represent the true conditions on the ground. If these error-prone annotations are used as training data for deep learning models, this may lead to inaccurate mapping results for the models. This study critically validates manual tree crown annotations on two study sites: a forest-like plantation on a cemetery and a natural city forest. The validation is based on tree reference data in the form of an official tree register and tree segments extracted from UAV laser scanning (ULS) data for the quality assessment of a training dataset. The validation results reveal that the manual annotations detect only 37% of the tree crowns in the forest-like plantation area and 10% of the tree crowns in the natural forest correctly. Furthermore, it is frequent for multiple trees to be interpreted in the annotation as a single tree at both study sites. Full article
(This article belongs to the Section AI Remote Sensing)
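One simple way to make the kind of validation described above concrete is to match each reference crown (from the tree register or the ULS-derived segments) to the manual annotation with the highest intersection-over-union, counting a detection only above a threshold and flagging annotations that span several reference crowns as merged trees. The sketch below is illustrative only; the 0.5 threshold and the merge rule are assumptions, not the paper's exact protocol.

```python
# Hedged sketch: IoU-based matching of manual crown annotations against
# reference crown polygons, reporting a detection rate and merged annotations.
from shapely.geometry import Polygon

def iou(a: Polygon, b: Polygon) -> float:
    inter = a.intersection(b).area
    return inter / a.union(b).area if inter else 0.0

def score(annotations, references, thr=0.5):
    correct = sum(1 for ref in references
                  if max((iou(ref, ann) for ann in annotations), default=0.0) >= thr)
    # annotations overlapping more than one reference crown -> likely merged trees
    merged = sum(1 for ann in annotations
                 if sum(ann.intersection(ref).area > 0 for ref in references) > 1)
    return {"detection_rate": correct / len(references), "merged_annotations": merged}

# Toy example: one annotation covers two neighbouring reference crowns
refs = [Polygon([(0, 0), (2, 0), (2, 2), (0, 2)]),
        Polygon([(3, 0), (5, 0), (5, 2), (3, 2)])]
anns = [Polygon([(0, 0), (5, 0), (5, 2), (0, 2)])]
print(score(anns, refs))   # {'detection_rate': 0.0, 'merged_annotations': 1}
```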
Show Figures

Figure 1. Mapping informal settlements in Nairobi, Kenya with manual annotations. Each colored line indicates a different annotator's delineation of the same area [16]: (a) boundary deviation due to generalization of informal settlements and (b) deviation resulting from inclusion or exclusion of fringe [26] (adapted from Elemes et al. [16] with permission from Kohli et al. [26]).
Figure 2. The four validation areas (red outlines) of study site 1.
Figure 3. Nadir 3D point cloud in RGB color scheme (a) and derived 2D segments (b), which represent the single tree reference data for the validation process of study site 2.
Figure 4. Example annotation images with 512 × 512 pixel resolution based on the digital orthophoto (a) and the satellite image from WorldView-3 (b).
19 pages, 5497 KiB  
Review
Earth Observation—An Essential Tool towards Effective Aquatic Ecosystems’ Management under a Climate in Change
by Filipe Lisboa, Vanda Brotas and Filipe Duarte Santos
Remote Sens. 2024, 16(14), 2597; https://doi.org/10.3390/rs16142597 - 16 Jul 2024
Cited by 1 | Viewed by 1383
Abstract
Numerous policies have been proposed by international and supranational institutions, such as the European Union, to surveil Earth from space and furnish indicators of environmental conditions across diverse scenarios. In tandem with these policies, different initiatives, particularly on both sides of the Atlantic, [...] Read more.
Numerous policies have been proposed by international and supranational institutions, such as the European Union, to surveil Earth from space and furnish indicators of environmental conditions across diverse scenarios. In tandem with these policies, different initiatives, particularly on both sides of the Atlantic, have emerged to provide valuable data for environmental management such as the concept of essential climate variables. However, a key question arises: do the available data align with the monitoring requirements outlined in these policies? In this paper, we concentrate on Earth Observation (EO) optical data applications for environmental monitoring, with a specific emphasis on ocean colour. In a rapidly changing climate, it becomes imperative to consider data requirements for upcoming space missions. We place particular significance on the application of these data when monitoring lakes and marine protected areas (MPAs). These two use cases, albeit very different in nature, underscore the necessity for higher-spatial-resolution imagery to effectively study these vital habitats. Limnological ecosystems, sensitive to ice melting and temperature fluctuations, serve as crucial indicators of a climate in change. Simultaneously, MPAs, although generally small in size, play a crucial role in safeguarding marine biodiversity and supporting sustainable marine resource management. They are increasingly acknowledged as a critical component of global efforts to conserve and manage marine ecosystems, as exemplified by Target 3 of the Kunming–Montreal Global Biodiversity Framework (GBF), which aims to effectively conserve 30% of terrestrial, inland water, coastal, and marine areas by 2030 through protected areas and other conservation measures. In this paper, we analysed different policies concerning EO data and their application to environmental-based monitoring. We also reviewed and analysed the existing relevant literature in order to find gaps that need to be bridged to effectively monitor these habitats in an ecosystem-based approach, making data more accessible, leading to the generation of water quality indicators derived from new high- and very high-resolution satellite monitoring focusing especially on Chlorophyll-a concentrations. Such data are pivotal for comprehending, at small and local scales, how these habitats are responding to climate change and various stressors. Full article
(This article belongs to the Section Biogeosciences Remote Sensing)
Show Figures

Figure 1. Overview of the three Copernicus components: space, in situ and services. Contributing missions are not part of the space component; they can be included directly in the provision of Copernicus services.
Figure 2. Overview of the resolutions available for the Copernicus space segment and contributing missions. VHR resolutions, needed for the monitoring of very small lakes and MPAs, are only available through contributing missions. All satellites in blue, green and orange are privately owned.
Figure 3. Of the 56 ECVs, 36 (highlighted in green) rely on EO data. Some of the concerned organisations are depicted at the top right of each ECV. The concerned organisations are given as examples, since many more are involved in providing products for each ECV. For example, nine organisations are involved in providing data for the Precipitation ECV. Only NASA manages the Lightning ECV and only ESA is responsible for the Ocean Colour ECV. The star represents a future engagement of ESA in the Permafrost ECV.
Figure 4. Results from the Web of Science query.
Figure 5. VOSviewer map of co-occurrences in the 8054 abstracts analysed, showing how lake research and satellite remote sensing are close subjects.
Figure 6. VOSviewer map of co-occurrences in the 231 abstracts analysed, showing how marine protected areas and satellite remote sensing do not co-occur in the scientific literature. The map can be accessed through the QR code, with the possibility to explore the clusters.
24 pages, 96595 KiB  
Article
Modified ESRGAN with Uformer for Video Satellite Imagery Super-Resolution
by Kinga Karwowska and Damian Wierzbicki
Remote Sens. 2024, 16(11), 1926; https://doi.org/10.3390/rs16111926 - 27 May 2024
Viewed by 1388
Abstract
In recent years, a growing number of sensors that provide imagery with constantly increasing spatial resolution are being placed in orbit. Contemporary Very-High-Resolution Satellites (VHRS) are capable of recording images with a spatial resolution of less than 0.30 m. However, until now, [...] Read more.
In recent years, a growing number of sensors that provide imagery with constantly increasing spatial resolution are being placed in orbit. Contemporary Very-High-Resolution Satellites (VHRS) are capable of recording images with a spatial resolution of less than 0.30 m. However, until now, these scenes have been acquired in a static way. The new technique of dynamic acquisition of video satellite imagery has been available for only a few years. It has multiple applications related to remote sensing. However, despite the possibility it offers of detecting dynamic targets, its main limitation is the degradation of the spatial resolution of the image that results from imaging in video mode, along with the significant influence of lossy compression. This article presents a methodology that employs Generative Adversarial Networks (GANs). For this purpose, a modified ESRGAN architecture is used for the spatial resolution enhancement of video satellite images. In this solution, the GAN network generator was extended by the Uformer model, which is responsible for a significant improvement in the quality of the estimated SR images. This significantly enhances the ability to recognize and detect objects. The discussed solution was tested on the Jilin-1 dataset and presents the best results for both the global and local assessment of the image (the mean values of the SSIM and PSNR parameters for the test data were, respectively, 0.98 and 38.32 dB). Additionally, the proposed solution, despite employing artificial neural networks, does not require high computational capacity, which means it can be implemented on workstations that are not equipped with graphics processors. Full article
(This article belongs to the Section Remote Sensing Image Processing)
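The local assessment mentioned in the abstract (SSIM and PSNR over 20 × 20 pixel evaluation fields, as in Figures 7 and 8 below) can be sketched as follows. This is a minimal illustration assuming 8-bit greyscale tiles and non-overlapping windows; it is not the authors' evaluation code.

```python
# Hedged sketch: per-window SSIM and PSNR maps between a super-resolved (SR)
# tile and the reference HR tile, using 20 x 20 pixel evaluation fields.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def local_quality(sr: np.ndarray, hr: np.ndarray, win: int = 20):
    """Return per-window SSIM and PSNR maps for two same-sized greyscale images."""
    rows, cols = hr.shape[0] // win, hr.shape[1] // win
    ssim_map = np.zeros((rows, cols))
    psnr_map = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            a = sr[i * win:(i + 1) * win, j * win:(j + 1) * win]
            b = hr[i * win:(i + 1) * win, j * win:(j + 1) * win]
            ssim_map[i, j] = structural_similarity(b, a, data_range=255)
            psnr_map[i, j] = peak_signal_noise_ratio(b, a, data_range=255)
    return ssim_map, psnr_map

rng = np.random.default_rng(0)
hr = rng.integers(0, 256, (200, 200)).astype(np.float64)
sr = hr + rng.normal(0, 5, hr.shape)              # stand-in for an SR estimate
ssim_map, psnr_map = local_quality(sr, hr)
print(ssim_map.mean().round(3), psnr_map.mean().round(2))
```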
Show Figures

Figure 1. Diagram of enhancement of the spatial resolution of a single video frame.
Figure 2. Discriminator model [58].
Figure 3. The flowchart of the algorithm.
Figure 4. Examples of images from the test data, with quality results shown in Table 3: (a) HR image, (b) MCWESRGAN with Uformer, (c) MCWESRGAN with the Lucy–Richardson algorithm, and (d) MCWESRGAN with Wiener deconvolution.
Figure 5. Structural similarity between the estimated images (tiles) (SR) and the reference HR images.
Figure 6. Peak signal-to-noise ratio (PSNR [dB]) between the estimated images (tiles) (SR) and the reference HR images.
Figure 7. Local assessment: SSIM metrics (for an evaluated field of size 20 × 20 pixels).
Figure 8. Local assessment: PSNR metrics (for an evaluated field of size 20 × 20 pixels).
Figure 9. PSD diagram in the x and y directions for a sample image.
Figure 10. Images in the frequency domain.