Search Results (319)

Search Parameters:
Keywords = very high resolution satellite imagery

25 pages, 9000 KiB  
Article
Five-Year Evaluation of Sentinel-2 Cloud-Free Mosaic Generation Under Varied Cloud Cover Conditions in Hawai’i
by Francisco Rodríguez-Puerta, Ryan L. Perroy, Carlos Barrera, Jonathan P. Price and Borja García-Pascual
Remote Sens. 2024, 16(24), 4791; https://doi.org/10.3390/rs16244791 (registering DOI) - 22 Dec 2024
Abstract
The generation of cloud-free satellite mosaics is essential for a range of remote sensing applications, including land use mapping, ecosystem monitoring, and resource management. This study focuses on remote sensing across the climatic diversity of Hawai’i Island, which encompasses ten Köppen climate zones from tropical to Arctic: periglacial. This diversity presents unique challenges for cloud-free image generation. We conducted a comparative analysis of three cloud-masking methods: two Google Earth Engine algorithms (CloudScore+ and s2cloudless) and a new proprietary deep learning-based algorithm (L3) applied to Sentinel-2 imagery. These methods were evaluated against the best monthly composite selected from high-frequency Planet imagery, which acquires daily images. All Sentinel-2 bands were enhanced to a 10 m resolution, and an advanced weather mask was applied to generate monthly mosaics from 2019 to 2023. We stratified the analysis by cloud cover frequency (low, moderate, high, and very high), applying one-way and two-way ANOVAs to assess cloud-free pixel success rates. Results indicate that CloudScore+ achieved the highest success rate at 89.4% cloud-free pixels, followed by L3 and s2cloudless at 79.3% and 80.8%, respectively. Cloud removal effectiveness decreased as cloud cover increased, with clear pixel success rates ranging from 94.6% under low cloud cover to 79.3% under very high cloud cover. Additionally, seasonality effects showed higher cloud removal rates in the wet season (88.6%), while no significant year-to-year differences were observed from 2019 to 2023. This study advances current methodologies for generating reliable cloud-free mosaics in tropical and subtropical regions, with potential applications for remote sensing in other cloud-dense environments. Full article
(This article belongs to the Special Issue Advances in Deep Learning Approaches in Remote Sensing)
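The abstract above compares CloudScore+ and s2cloudless masks run in Google Earth Engine. As a rough, generic illustration of that kind of workflow (not the paper's actual configuration), the sketch below builds a monthly cloud-masked Sentinel-2 median composite using the public s2cloudless probability layer; the AOI, date range, and 40% probability threshold are placeholder assumptions.

```python
import ee

ee.Initialize()

# Placeholder AOI (rough Hawai'i Island extent) and a single illustrative month.
aoi = ee.Geometry.Rectangle([-155.9, 19.4, -155.0, 20.3])
s2 = (ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
      .filterBounds(aoi)
      .filterDate('2023-01-01', '2023-02-01'))
s2_clouds = ee.ImageCollection('COPERNICUS/S2_CLOUD_PROBABILITY')

def mask_clouds(img):
    # Pair each Sentinel-2 scene with its s2cloudless probability image via the shared system:index.
    prob = ee.Image(
        s2_clouds.filter(ee.Filter.eq('system:index', img.get('system:index'))).first()
    ).select('probability')
    # Keep pixels with low cloud probability; the 40% cutoff is a tunable assumption.
    return img.updateMask(prob.lt(40))

monthly_mosaic = s2.map(mask_clouds).median().clip(aoi)
```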
Figure 1. Island of Hawai’i: (a) Köppen climate map adapted from the Mauna Loa Observatory Report [40] and (b) the annual mean cloud cover percentage based on the Hawai’i Climate Atlas [39].
Figure 2. Cloud weight computation for a single Sentinel-2 image, showing the different spatial resolutions used during the process.
Figure 3. Cloud cover stratification across the island of Hawai’i, derived from data provided by the Hawaiian Climate Atlas [39]. The black squares represent the locations of the evaluated blocks (a total of 240 blocks, each covering 100 hectares) within each cloud cover stratum (60 blocks allocated to each stratum). Yellow stars indicate the spatial distribution of the sample blocks presented in the Results.
Figure 4. Example of a visual inspection of a moderate cloud cover block (ID = 13,020). Panel (A) shows mosaics of the evaluated masks: (1) CloudScore+, (2) s2cloudless, (3) L3, and (4) the Planet reference image for comparison. Panel (B) shows the results after visual inspection, with “clear” pixels colored purple, “cloudy” pixels colored orange, and “no data” pixels colored gray.
Figure 5. Visual examples of cloud detection results under varying cloud cover conditions (low, moderate, high, and very high) for the CloudScore+, s2cloudless, and L3 algorithms. The figure is divided into four quadrants, each representing a specific cloud cover level. Within each quadrant, individual 1 × 1 km blocks are ordered from left to right according to the accuracy quartiles of the L3 mask, with the leftmost column showing the highest accuracy quartile (Q1) and the rightmost column the lowest (Q4). Rows correspond to the evaluated masks (top to bottom: CloudScore+, s2cloudless, L3) and PlanetScope imagery as ground truth. Errors (clouds) are displayed as white areas, while “no data” pixels appear as large gray grid patterns. The spatial locations of these blocks are indicated in Figure 3 by yellow stars.
Figure 6. Box plots showing percentages of success (“clear”) in purple, errors (“cloudy”) in orange, and “no data” in gray for the one-way factors analyzed: (top left) mask, (top right) season, (bottom left) year, and (bottom right) cloud cover.
Figure 7. Box plots showing the percentages of success (“clear”) in purple, errors (“cloudy”) in orange, and “no data” in gray for each mask and cloud cover level combination (low, moderate, high, and very high).
Figure 8. Box plots showing the percentages of success (“clear”) in purple, errors (“cloudy”) in orange, and “no data” in gray for each combination of mask, year, and season.
Figure 9. Pairwise comparison of success rates (“clear” pixels) across different levels of cloud cover and mask algorithms, using Tukey’s HSD test. From top to bottom, the cloud cover levels are ordered as low, moderate, high, and very high. Statistically significant differences are marked with an asterisk, and the superior mask (indicated by a higher percentage of “clear” pixels) is underlined. Abbreviations: s2cl = s2cloudless mask, CS = CloudScore+.
Figure 10. Pairwise comparison of error rates (“cloudy” pixels) across different levels of cloud cover and mask algorithms, using Tukey’s HSD test. From top to bottom, cloud cover levels are ordered as low, moderate, high, and very high. Statistically significant differences are marked with an asterisk, and the superior mask (indicated by a lower percentage of “cloudy” pixels) is underlined. Abbreviations: s2cl = s2cloudless mask, CS = CloudScore+.
Figure 11. Pairwise comparison of the “no data” percentage across different levels of cloud cover and mask algorithms, using Tukey’s HSD test. From top to bottom, cloud cover levels are ordered as low, moderate, high, and very high. Statistically significant differences are marked with an asterisk, and the mask with fewer “no data” pixels is underlined. Abbreviations: s2cl = s2cloudless mask, CS = CloudScore+.
21 pages, 10857 KiB  
Article
Application of PlanetScope Imagery for Flood Mapping: A Case Study in South Chickamauga Creek, Chattanooga, Tennessee
by Mithu Chanda and A. K. M. Azad Hossain
Remote Sens. 2024, 16(23), 4437; https://doi.org/10.3390/rs16234437 - 27 Nov 2024
Viewed by 738
Abstract
Floods stand out as one of the most expensive natural calamities, causing harm to both lives and properties for millions of people globally. The increasing frequency and intensity of flooding underscores the need for accurate and timely flood mapping methodologies to enhance disaster preparedness and response. Earth observation data obtained through satellites offer comprehensive and recurring perspectives of areas that may be prone to flooding. This paper shows the suitability of high-resolution PlanetScope imagery as an efficient and accessible approach for flood mapping through a case study in South Chickamauga Creek (SCC), Chattanooga, Tennessee, focusing on a significant flooding event in 2020. The extent of the flood water was delineated and mapped using image classification and density slicing of Normalized Difference Water Index (NDWI). The obtained results indicate that PlanetScope imagery performed well in flood mapping for a narrow creek like SCC, achieving an overall accuracy of more than 90% and a Kappa coefficient of over 0.80. The findings of this research contribute to a better understanding of the flood event in Chattanooga and demonstrate that PlanetScope imagery can be utilized as a very useful resource for accurate and timely flood mapping of streams with narrow widths. Full article
(This article belongs to the Special Issue Remote Sensing of Floods: Progress, Challenges and Opportunities)
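The flood extent in this study is delineated in part by density slicing an NDWI image. A minimal sketch of that step, assuming a hypothetical 4-band PlanetScope surface-reflectance GeoTIFF (band order blue, green, red, NIR) and a scene-specific threshold of 0:

```python
import numpy as np
import rasterio

# Hypothetical file name and band order for a 4-band PlanetScope scene.
with rasterio.open('planetscope_post_flood.tif') as src:
    green = src.read(2).astype('float32')
    nir = src.read(4).astype('float32')

# McFeeters NDWI = (Green - NIR) / (Green + NIR); small epsilon avoids division by zero.
ndwi = (green - nir) / (green + nir + 1e-6)

# Density slicing: pixels above a scene-specific threshold are labeled as water.
water = ndwi > 0.0
flooded_area_km2 = water.sum() * (3.0 * 3.0) / 1e6  # PlanetScope pixels are ~3 m
print(f'Water-covered area: {flooded_area_km2:.2f} km²')
```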
Figure 1. The location map of the study site (referenced to Hamilton County, TN, and Catoosa and Walker Counties, GA) on an ESRI base map available in ArcGIS Pro (version 3.1.3). The red color on the lower left side of the image indicates the study area, and the right image presents the boundary of South Chickamauga Creek for this study.
Figure 2. Schematic workflow of flood mapping using PlanetScope imagery.
Figure 3. (a) Landsat 8 OLI and (b) PlanetScope satellite images of pre-flood conditions of the study site and a detailed pixel image of the creek width of SCC. Near-infrared bands were used to make the maps for both Landsat and PlanetScope imagery.
Figure 4. PlanetScope satellite images of pre- and post-flood conditions of the study site. (a,b) True color image display; (c,d) false color composite display (Green, Red, and NIR bands).
Figure 5. The red color indicates the random pixels used for accuracy assessment of (a) pre-flood and (b) post-flood conditions. The background images are shown in true color.
Figure 6. NDWI-classified thematic maps of (a) pre- and (b) post-flood conditions of the study site.
Figure 7. Unsupervised classified thematic maps of (a) pre- and (b) post-flood conditions of the study site.
Figure 8. Major flood-affected areas of SCC using (i,iii) density slicing of the NDWI image and (ii,iv) unsupervised classification.
Figure 9. Total water-covered areas of the study site in pre-flood and post-flood conditions.
24 pages, 6941 KiB  
Article
Discriminating Seagrasses from Green Macroalgae in European Intertidal Areas Using High-Resolution Multispectral Drone Imagery
by Simon Oiry, Bede Ffinian Rowe Davies, Ana I. Sousa, Philippe Rosa, Maria Laura Zoffoli, Guillaume Brunier, Pierre Gernez and Laurent Barillé
Remote Sens. 2024, 16(23), 4383; https://doi.org/10.3390/rs16234383 - 23 Nov 2024
Viewed by 755
Abstract
Coastal areas support seagrass meadows, which offer crucial ecosystem services, including erosion control and carbon sequestration. However, these areas are increasingly impacted by human activities, leading to habitat fragmentation and seagrass decline. In situ surveys, traditionally performed to monitor these ecosystems, face limitations on temporal and spatial coverage, particularly in intertidal zones, prompting the addition of satellite data within monitoring programs. Yet, satellite remote sensing can be limited by too coarse spatial and/or spectral resolutions, making it difficult to discriminate seagrass from other macrophytes in highly heterogeneous meadows. Drone (unmanned aerial vehicle—UAV) images at a very high spatial resolution offer a promising solution to address challenges related to spatial heterogeneity and the intrapixel mixture. This study focuses on using drone acquisitions with a ten spectral band sensor similar to that onboard Sentinel-2 for mapping intertidal macrophytes at low tide (i.e., during a period of emersion) and effectively discriminating between seagrass and green macroalgae. Nine drone flights were conducted at two different altitudes (12 m and 120 m) across heterogeneous intertidal European habitats in France and Portugal, providing multispectral reflectance observation at very high spatial resolution (8 mm and 80 mm, respectively). Taking advantage of their extremely high spatial resolution, the low altitude flights were used to train a Neural Network classifier to discriminate five taxonomic classes of intertidal vegetation: Magnoliopsida (Seagrass), Chlorophyceae (Green macroalgae), Phaeophyceae (Brown algae), Rhodophyceae (Red macroalgae), and benthic Bacillariophyceae (Benthic diatoms), and validated using concomitant field measurements. Classification of drone imagery resulted in an overall accuracy of 94% across all sites and images, covering a total area of 467,000 m2. The model exhibited an accuracy of 96.4% in identifying seagrass. In particular, seagrass and green algae can be discriminated. The very high spatial resolution of the drone data made it possible to assess the influence of spatial resolution on the classification outputs, showing a limited loss in seagrass detection up to about 10 m. Altogether, our findings suggest that the MultiSpectral Instrument (MSI) onboard Sentinel-2 offers a relevant trade-off between its spatial and spectral resolution, thus offering promising perspectives for satellite remote sensing of intertidal biodiversity over larger scales. Full article
(This article belongs to the Section Ecological Remote Sensing)
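The study trains a neural network on per-pixel multispectral reflectance to separate five vegetation classes. The paper's exact architecture and feature set (raw plus standardized reflectance) are not reproduced here; the sketch below only shows a generic per-pixel setup, with synthetic data standing in for the labeled drone pixels.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# X: per-pixel reflectance in 10 spectral bands; y: class labels
# (0..4 = seagrass, green algae, brown algae, red algae, benthic diatoms).
# Random data stands in for the labeled drone pixels used in the study.
rng = np.random.default_rng(0)
X = rng.random((5000, 10))
y = rng.integers(0, 5, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print('Overall accuracy:', accuracy_score(y_te, clf.predict(X_te)))
```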
Figure 1. Location of drone flights in France and Portugal. (A) Gulf of Morbihan (two sites), (B) Bourgneuf Bay (two sites), and (C) Ria de Aveiro Coastal Lagoon (three sites). The golden areas represent the intertidal zone.
Figure 2. The five taxonomic classes of vegetation used to train the Neural Network model and an example of their raw spectral signatures at the spectral resolution of the MicaSense RedEdge Dual MX. (A) Magnoliopsida (Zostera noltei); (B) Phaeophyceae (Fucus sp.); (C) Rhodophyceae (Gracilaria vermiculophylla); (D) Chlorophyceae (Ulva sp.); (E) Bacillariophyceae (benthic diatoms); (F) spectral signature of each vegetation class. Classes and species taxonomy follow the WoRMS (World Register of Marine Species) classification.
Figure 3. Schematic representation of the workflow. Parallelograms represent input or output data, and rectangles represent Python processing algorithms. The overall workflow is divided into two distinct parts based on the spatial resolution of the drone flights: high-resolution flights (pixel size: 8 mm) were utilized for training and prediction of the Neural Network model, whereas lower-resolution flights (pixel size: 80 mm) were solely employed for prediction and validation purposes. Validation was performed on both high- and low-resolution flights.
Figure 4. Comparison of reflectance retrieved from both low-altitude and high-altitude flights over a common area. The black dashed line represents a 1:1 relationship. The left panel (A) plots raw data, and the right panel (B) plots standardized data (Equation (1)).
Figure 5. RGB ortho-mosaic (left) and prediction (right) of the low-altitude flight at Gafanha, Portugal. The total extent of this flight was 3000 m² with a resolution of 8 mm per pixel. The zoom covers an area equivalent to a 10 m Sentinel-2 pixel.
Figure 6. RGB ortho-mosaic (left) and prediction (right) of the high-altitude flight at Gafanha, Portugal. The total extent of this flight was about 1 km² with a resolution of 80 mm per pixel. The yellow outline shows the extent of Gafanha’s low-altitude flight, as presented in Figure 5. The zoom covers an area equivalent to a 10 m Sentinel-2 pixel.
Figure 7. RGB ortho-mosaic (top) and prediction (bottom) of the flight made in the inner part of the Ria de Aveiro coastal lagoon, Portugal. The total extent of this flight was about 1.5 km² with a resolution of 80 mm per pixel. The zoom inserts cover an area equivalent to a 10 m Sentinel-2 pixel.
Figure 8. RGB ortho-mosaic (top) and prediction (bottom) of L’Epine, France. The total extent of this flight was about 28,000 m² with a resolution of 80 mm per pixel. The zoom covers an area equivalent to a 10 m Sentinel-2 pixel.
Figure 9. A global confusion matrix (left) derived from validation data across each flight, and a mosaic of confusion matrices from individual flights (right). The labels inside the matrices indicate the balanced accuracy for each class. The labels at the bottom of the global matrix indicate the User’s Accuracy for each class, and those on the right indicate the Producer’s Accuracy. The values adjacent to the names of each site represent the proportion of total pixels from that site contributing to the overall matrix. Grey lines within the mosaic indicate the absence of validation data for the class at that site. The table at the bottom summarizes the sensitivity, specificity, and accuracy for each class and for the overall model.
Figure 10. Variable importance of the Neural Network classifier for each taxonomic class. The longer the slice, the more important the variable for prediction of that class. The right plot shows the drone raw and standardized reflectance spectra of each class. Each slice represents the variable importance (VI) of both raw and standardized reflectance combined.
Figure 11. Predicted area loss for different vegetation types (green algae, seagrass, brown algae, and red algae) as a function of spatial resolution. The lines represent Generalized Linear Model (GLM) predictions, and shaded areas indicate standard errors. As the resolution decreases, predicted area loss increases for all vegetation types, with green algae showing the highest loss and seagrass the smallest at coarser resolutions.
Figure 12. Kernel density plot showing the proportion of correctly classified pixels based on the percent cover of the class in high-altitude flight pixels at Gafanha, Portugal. Each subplot shows all the pixels of the same class in the high-altitude flight. The cover (%) of classes was retrieved from the classification of the low-altitude flight at Gafanha, Portugal.
Figure 13. Photosynthetic and carotenoid pigments present (green) or absent (red) in each taxonomic class used in the Neural Network classifier, along with their absorption wavelengths measured with a spectroradiometer. Chl-b = chlorophyll-b, Chl-c = chlorophyll-c, Fuco = fucoxanthin, Zea = zeaxanthin, Diad = diadinoxanthin, Lut = lutein, Neo = neoxanthin, PE = phycoerythrin, PC = phycocyanin [25,26,54,55,56].
Figure 14. Subset of Figure 9 focusing on green macrophytes. The labels inside the matrix indicate the number of pixels.
18 pages, 16650 KiB  
Article
Mapping Seagrass Distribution and Abundance: Comparing Areal Cover and Biomass Estimates Between Space-Based and Airborne Imagery
by Victoria J. Hill, Richard C. Zimmerman, Dorothy A. Byron and Kenneth L. Heck
Remote Sens. 2024, 16(23), 4351; https://doi.org/10.3390/rs16234351 - 21 Nov 2024
Viewed by 599
Abstract
This study evaluated the effectiveness of Planet satellite imagery in mapping seagrass coverage in Santa Rosa Sound, Florida. We compared very-high-resolution aerial imagery (0.3 m) collected in September 2022 with high-resolution Planet imagery (~3 m) captured during the same period. Using supervised classification techniques, we accurately identified expansive, continuous seagrass meadows in the satellite images, successfully classifying 95.5% of the 11.18 km2 of seagrass area delineated manually from the aerial imagery. Our analysis utilized an occurrence frequency (OF) product, which was generated by processing ten clear-sky images collected between 8 and 25 September 2022 to determine the frequency with which each pixel was classified as seagrass. Seagrass patches encompassing at least nine pixels (~200 m2) were almost always detected by our classification algorithm. Using an OF threshold equal to or greater than >60% provided a high level of confidence in seagrass presence while effectively reducing the impact of small misclassifications, often of individual pixels, that appeared sporadically in individual images. The image-to-image uncertainty in seagrass retrieval from the satellite images was 0.1 km2 or 2.3%, reflecting the robustness of our classification method and allowing confidence in the accuracy of the seagrass area estimate. The satellite-retrieved leaf area index (LAI) was consistent with previous in situ measurements, leading to the estimate that 2700 tons of carbon per year are produced by the Santa Rosa Sound seagrass ecosystem, equivalent to a drawdown of approximately 10,070 tons of CO2. This satellite-based approach offers a cost-effective, semi-automated, and scalable method of assessing the distribution and abundance of submerged aquatic vegetation that provides numerous ecosystem services. Full article
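The occurrence-frequency (OF) product described above can be illustrated with a small NumPy sketch: stack the per-scene seagrass masks, compute the fraction of scenes that flag each pixel, and keep pixels at or above the 60% threshold. The array sizes and random stand-in masks below are assumptions for illustration only.

```python
import numpy as np

# masks: boolean stack (n_images, rows, cols), True where a pixel was classified as
# seagrass in one of the ~10 clear-sky September scenes; random data stands in here.
rng = np.random.default_rng(1)
masks = rng.random((10, 500, 500)) > 0.5

occurrence_frequency = masks.mean(axis=0) * 100.0   # % of scenes flagging each pixel as seagrass
seagrass = occurrence_frequency >= 60.0              # OF >= 60% threshold from the abstract
area_km2 = seagrass.sum() * (3.0 * 3.0) / 1e6        # ~3 m Planet pixels
print(f'Seagrass area: {area_km2:.2f} km²')
```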
Figure 1. (A) The location of Pensacola Bay is indicated by the red box. (B) The location of Santa Rosa Sound is indicated by the red outline. Underlying Ocean basemap from Esri (ArcGIS Pro 3.3.2). Sources: Esri, GEBCO, NOAA, National Geographic, DeLorme, HERE, Geonames.org, and other contributors.
Figure 2. Flowchart outlining the processing steps for satellite and aerial imagery.
Figure 3. Aerial imagery with areas identified as containing seagrass overlaid as polygons (red). The white-dashed box is the location of the image overlap used in uncertainty estimates; solid white boxes numbered 1 through 5 are the locations of examples shown in later figures. Green dots highlight the locations of East Sabine and Big Sabine Point, mentioned later in the text.
Figure 4. Seagrass area polygons derived from aerial imagery (black lines), and Planet-identified seagrass using OF thresholds of ≥60% and ≥90%, overlaid on aerial imagery. (A) Subset of aerial imagery highlighted as Box 3 in Figure 3; (B) subset of aerial imagery highlighted as Box 5 in Figure 3.
Figure 5. Proportion of false-negative area by polygon size (with OF ≥ 60%) for aerial polygons where zero Planet pixels were identified as seagrass.
Figure 6. Previous (black) seagrass areal extent for Santa Rosa Sound based on historic data [32,33] and the 2022 estimate (red) derived from this analysis. The historical area was generated by setting the average patchy density at 50% and summing the continuous + patchy × 0.5 areas provided in the literature.
Figure 7. (A) RGB representation of a subset of aerial imagery showing the large continuous seagrass meadow at Big Sabine Point in the middle of Santa Rosa Sound (see Figure 3, box 1). (B) Seagrass OF derived from all satellite images overlaid on the aerial imagery from panel (A). (C) Mean leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (A). (D) Polygons based on aerial imagery (dashed white lines) and polygons generated for OF ≥ 60% (dashed red lines) overlaid on Planet imagery.
Figure 8. (A) RGB representation of a subset of aerial imagery showing a seagrass meadow along the north shore of Santa Rosa Sound; the location of this site is shown in Figure 3, box 2. (B) Planet-derived OF overlaid on the aerial imagery from panel (A). (C) Leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (A). (D) Polygons based on aerial imagery (dashed white lines) and polygons generated for OF ≥ 60% (dashed red lines) overlaid on Planet imagery.
Figure 9. (A) Subset of RGB aerial imagery just west of Navarre Bridge; the location of this site is shown in Figure 3, box 3. Seagrass meadows along the shore and in the middle of Santa Rosa Sound were obscured by suspended sediment plumes in the aerial image. (B) Planet-derived OF overlaid on the aerial imagery from panel (A). (C) Leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (A). (D) Polygons based on aerial imagery (dashed white lines) and polygons generated from OF ≥ 60% (dashed red lines) overlaid on Planet imagery.
Figure 10. (A) Subset of RGB aerial imagery showing an example of seagrass distribution over a shallow sand bank along the southern shore of Santa Rosa Sound; the location of this site is shown in Figure 3, box 4, and the white arrow points to shallow sand with small seagrass patches. (B) Planet-derived OF overlaid on the aerial imagery from panel (A). (C) Leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (A). (D) Polygons based on aerial imagery (dashed white lines) and polygons generated for OF ≥ 60% (dashed red lines) overlaid on Planet imagery.
23 pages, 7255 KiB  
Article
Exploring the Relationship Between Very-High-Resolution Satellite Imagery Data and Fruit Count for Predicting Mango Yield at Multiple Scales
by Benjamin Adjah Torgbor, Priyakant Sinha, Muhammad Moshiur Rahman, Andrew Robson, James Brinkhoff and Luz Angelica Suarez
Remote Sens. 2024, 16(22), 4170; https://doi.org/10.3390/rs16224170 - 8 Nov 2024
Viewed by 921
Abstract
Tree- and block-level prediction of mango yield is important for farm operations, but current manual methods are inefficient. Previous research has identified the accuracies of mango yield forecasting using very-high-resolution (VHR) satellite imagery and an ’18-tree’ stratified sampling method. However, this approach still requires infield sampling to calibrate canopy reflectance and the derived block-level algorithms are unable to translate to other orchards due to the influences of abiotic and biotic conditions. To better appreciate these influences, individual tree yields and corresponding canopy reflectance properties were collected from 2015 to 2021 for 1958 individual mango trees from 55 orchard blocks across 14 farms located in three mango growing regions of Australia. A linear regression analysis of the block-level data revealed the non-existence of a universal relationship between the 24 vegetation indices (VIs) derived from VHR satellite data and fruit count per tree, an outcome likely due to the influence of location, season, management and cultivar. The tree-level fruit count predicted using a random forest (RF) model trained on all calibration data produced a percentage root mean squared error (PRMSE) of 26.5% and a mean absolute error (MAE) of 48 fruits/tree. The lowest PRMSEs produced from RF-based models developed from location, season and cultivar subsets at the individual tree level ranged from 19.3% to 32.6%. At the block level, the PRMSE for the combined model was 10.1% and the lowest values for the location, seasonal and cultivar subset models varied between 7.2% and 10.0% upon validation. Generally, the block-level predictions outperformed the individual tree-level models. Maps were produced to provide mango growers with a visual representation of yield variability across orchards. This enables better identification and management of the influence of abiotic and biotic constraints on production. Future research could investigate the causes of spatial yield variability in mango orchards. Full article
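A hedged sketch of the tree-level random forest step described above: regress fruit count on per-tree vegetation indices and report PRMSE and MAE as in the abstract. The synthetic data, feature construction, and hyperparameters below are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

# X: per-tree vegetation indices (24 VIs in the study); y: harvested fruit count per tree.
# Synthetic data stands in for the 1958-tree calibration set.
rng = np.random.default_rng(0)
X = rng.random((1958, 24))
y = 50 + 400 * X[:, 0] + rng.normal(0, 30, 1958)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)

rmse = mean_squared_error(y_te, pred) ** 0.5
prmse = 100 * rmse / y_te.mean()           # percentage RMSE, as reported in the abstract
mae = mean_absolute_error(y_te, pred)      # fruits per tree
print(f'PRMSE = {prmse:.1f}%, MAE = {mae:.1f} fruits/tree')
```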
Figure 1. Location of mango farms in the three mango growing regions of Australia.
Figure 2. Flowchart showing the sequence of procedural steps used in this study to generate the results.
Figure 3. Example of 18 tree locations on the classified NDVI map (a) and on the ESRI basemap image (b). The points with L, M and H prefixes represent the different tree vigour classes of low, medium and high, respectively.
Figure 4. Summary of fruits counted (a) per farm and (b) heterogeneity of cultivar yield distribution from 2015 to 2021. The numerical values and black dots associated with each boxplot represent the number of trees of that particular cultivar and the outliers, respectively.
Figure 5. Correlation between fruit count and the 24 VIs using the entire dataset of 1958 datapoints. The green and red colour ramps show the strength and direction of the correlation, positive and negative, respectively.
Figure 6. Distribution of slopes for CIRE_1 with average slope and standard deviation.
Figure 7. Relationships identified between RENDVI and fruit count: (a,b) positive for 2016 and 2017, (c) negative for 2020, and (d) non-existent for 2021.
Figure 8. RF prediction of fruit count using all individual tree datasets (combined model). The different coloured points represent the sampled trees from the respective farms and regions. n = 390 represents the number of datapoints (20%) used for model validation.
Figure 9. RF-based location (region) prediction of fruit count in the (a) Northern Territory (NT), (b) Northern Queensland (N–QLD) and (c) South East Queensland (SE–QLD). The different coloured points represent the sampled trees on a given farm in the respective regions.
Figure 10. RF-based variable importance plots for models from (a) combined datasets, (b) Northern Territory (NT), (c) Northern Queensland (N–QLD) and (d) South East Queensland (SE–QLD), and the best (e) seasonal and (f) cultivar models.
Figure 11. Comparison of total actual and predicted yield for the 51 validation points (blocks per season) obtained from 29 unique blocks with available actual harvest data from 2016 to 2021.
Figure 12. An example of a tree-level yield variability map derived from the RF-based combined model (right). The RGB image of the mapped mango orchard is shown on the left. The legend presents an industry-based categorization of yield variability ranging from low (0–55) to high (139–170) for this study.
20 pages, 10555 KiB  
Article
Cloud Detection Using a UNet3+ Model with a Hybrid Swin Transformer and EfficientNet (UNet3+STE) for Very-High-Resolution Satellite Imagery
by Jaewan Choi, Doochun Seo, Jinha Jung, Youkyung Han, Jaehong Oh and Changno Lee
Remote Sens. 2024, 16(20), 3880; https://doi.org/10.3390/rs16203880 - 18 Oct 2024
Viewed by 863
Abstract
It is necessary to extract and recognize the cloud regions presented in imagery to generate satellite imagery as analysis-ready data (ARD). In this manuscript, we proposed a new deep learning model to detect cloud areas in very-high-resolution (VHR) satellite imagery by fusing two deep learning architectures. The proposed UNet3+ model with a hybrid Swin Transformer and EfficientNet (UNet3+STE) was based on the structure of UNet3+, with the encoder sequentially combining EfficientNet based on mobile inverted bottleneck convolution (MBConv) and the Swin Transformer. By sequentially utilizing convolutional neural networks (CNNs) and transformer layers, the proposed algorithm aimed to extract the local and global information of cloud regions effectively. In addition, the decoder used MBConv to restore the spatial information of the feature map extracted by the encoder and adopted the deep supervision strategy of UNet3+ to enhance the model’s performance. The proposed model was trained using the open dataset derived from KOMPSAT-3 and 3A satellite imagery and conducted a comparative evaluation with the state-of-the-art (SOTA) methods on fourteen test datasets at the product level. The experimental results confirmed that the proposed UNet3+STE model outperformed the SOTA methods and demonstrated the most stable precision, recall, and F1 score values with fewer parameters and lower complexity. Full article
(This article belongs to the Special Issue Deep Learning Techniques Applied in Remote Sensing)
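The encoder described above combines EfficientNet-style MBConv blocks with Swin Transformer layers. As a simplified PyTorch sketch of the MBConv component only (omitting squeeze-and-excitation and the paper's exact channel and stage configuration):

```python
import torch
import torch.nn as nn

class MBConv(nn.Module):
    """Simplified mobile inverted bottleneck block, as used in EfficientNet-style encoders."""
    def __init__(self, in_ch, out_ch, expand=4, stride=1):
        super().__init__()
        mid = in_ch * expand
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),                        # 1x1 expansion
            nn.BatchNorm2d(mid), nn.SiLU(),
            nn.Conv2d(mid, mid, 3, stride, 1, groups=mid, bias=False),   # 3x3 depthwise conv
            nn.BatchNorm2d(mid), nn.SiLU(),
            nn.Conv2d(mid, out_ch, 1, bias=False),                        # 1x1 projection
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

x = torch.randn(1, 32, 128, 128)
print(MBConv(32, 32)(x).shape)  # torch.Size([1, 32, 128, 128])
```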
Figure 1. Examples of images contained in the training dataset: satellite images (top) and labeled reference data (bottom) (black: clear skies; red: thick and thin clouds; green: cloud shadows).
Figure 2. Test datasets for evaluating the performance of deep learning models (black: clear skies; red: thick clouds; green: thin clouds; yellow: cloud shadows).
Figure 3. Architecture of UNet3+.
Figure 4. Architecture of the proposed UNet3+STE model, where E = [E1, E2, E3, E4, E5] contains the feature map of each encoder stage and D = [D1, D2, D3, D4] includes the feature map of each decoder stage.
Figure 5. Structure of the encoder part.
Figure 6. Structures of the MBConvs in UNet3+STE.
Figure 7. Structure of the Swin Transformer layer.
Figure 8. Examples of structures for calculating D2 in the decoder part.
Figure 9. Deep supervision structures in the decoder part.
Figure 10. Precision, recall, and F1 scores for each class.
Figure 11. Cloud detection results produced for high-spatial-resolution (5965 × 6317) images at the product level (black: clear skies; red: thick and thin clouds; green: cloud shadows).
Figure 12. First-subset images (2000 × 2000) of the cloud detection results produced for high-spatial-resolution (5965 × 5720) images at the product level.
Figure 13. Second-subset images (2000 × 2000) of the cloud detection results produced for high-spatial-resolution (5965 × 5073) images at the product level.
29 pages, 6780 KiB  
Article
Phenological and Biophysical Mediterranean Orchard Assessment Using Ground-Based Methods and Sentinel 2 Data
by Pierre Rouault, Dominique Courault, Guillaume Pouget, Fabrice Flamain, Papa-Khaly Diop, Véronique Desfonds, Claude Doussan, André Chanzy, Marta Debolini, Matthew McCabe and Raul Lopez-Lozano
Remote Sens. 2024, 16(18), 3393; https://doi.org/10.3390/rs16183393 - 12 Sep 2024
Viewed by 1139
Abstract
A range of remote sensing platforms provide high spatial and temporal resolution insights which are useful for monitoring vegetation growth. Very few studies have focused on fruit orchards, largely due to the inherent complexity of their structure. Fruit trees are mixed with inter-rows that can be grassed or non-grassed, and there are no standard protocols for ground measurements suitable for the range of crops. The assessment of biophysical variables (BVs) for fruit orchards from optical satellites remains a significant challenge. The objectives of this study are as follows: (1) to address the challenges of extracting and better interpreting biophysical variables from optical data by proposing new ground measurements protocols tailored to various orchards with differing inter-row management practices, (2) to quantify the impact of the inter-row at the Sentinel pixel scale, and (3) to evaluate the potential of Sentinel 2 data on BVs for orchard development monitoring and the detection of key phenological stages, such as the flowering and fruit set stages. Several orchards in two pedo-climatic zones in southeast France were monitored for three years: four apricot and nectarine orchards under different management systems and nine cherry orchards with differing tree densities and inter-row surfaces. We provide the first comparison of three established ground-based methods of assessing BVs in orchards: (1) hemispherical photographs, (2) a ceptometer, and (3) the Viticanopy smartphone app. The major phenological stages, from budburst to fruit growth, were also determined by in situ annotations on the same fields monitored using Viticanopy. In parallel, Sentinel 2 images from the two study sites were processed using a Biophysical Variable Neural Network (BVNET) model to extract the main BVs, including the leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), and fraction of green vegetation cover (FCOVER). The temporal dynamics of the normalised FAPAR were analysed, enabling the detection of the fruit set stage. A new aggregative model was applied to data from hemispherical photographs taken under trees and within inter-rows, enabling us to quantify the impact of the inter-row at the Sentinel 2 pixel scale. The resulting value compared to BVs computed from Sentinel 2 gave statistically significant correlations (0.57 for FCOVER and 0.45 for FAPAR, with respective RMSE values of 0.12 and 0.11). Viticanopy appears promising for assessing the PAI (plant area index) and FCOVER for orchards with grassed inter-rows, showing significant correlations with the Sentinel 2 LAI (R2 of 0.72, RMSE 0.41) and FCOVER (R2 0.66 and RMSE 0.08). Overall, our results suggest that Sentinel 2 imagery can support orchard monitoring via indicators of development and inter-row management, offering data that are useful to quantify production and enhance resource management. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
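The abstract's "aggregative model" combines under-tree and inter-row hemispherical measurements at the Sentinel-2 pixel scale. One plausible reading of that combination, inferred only from the proportions quoted in the Figure 7 caption below (not a confirmed restatement of the paper's Equation (1)), is sketched here; the input values are made up.

```python
def fcover_pixel(fcover_tree: float, fcover_grass: float) -> float:
    """Combined ground-cover fraction at the Sentinel-2 pixel scale.

    Inferred reading: tree crowns contribute FCOVER_t directly, and the grassed
    inter-row contributes only over the ground not already covered by crowns,
    i.e. (1 - FCOVER_t) * FCOVER_g.
    """
    return fcover_tree + (1.0 - fcover_tree) * fcover_grass

fc = fcover_pixel(0.45, 0.60)                 # illustrative tree and grass fractions
tree_share = 100 * 0.45 / fc                  # % of pixel FCOVER due to trees
grass_share = 100 * (1 - 0.45) * 0.60 / fc    # % due to the grassed inter-row
print(fc, tree_share, grass_share)
```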
Figure 1. Schematic of the three approaches used to monitor orchard development at different spatial scales throughout the year (from tree level for phenological observations to watershed level using Sentinel 2 data).
Figure 2. (a) Locations of the monitored orchards in the Ouvèze–Ventoux watershed (green points at right) and in the La Crau area (yellow points at left); (b) pictures of 2 cherry orchards (13 September and 22 July 2022): top, a non-grassed orchard drip-irrigated by two rows of drippers, and bottom, a grassed orchard drip-irrigated in summer; (c) pictures of 2 orchards in La Crau (top, a nectarine tree in spring, 22 March 2023, and bottom, in summer, 26 June 2022).
Figure 3. (a) Main steps in processing the hemispherical photographs. (b) The three methods of data acquisition around the central tree. (c) Protocol used with hemispherical photographs. (d) Protocol used with the Viticanopy application, with 3 trees monitored in the four directions (blue arrows). (e) Protocols used with the ceptometer: P1 measured in the shadow of the trees (blue) and P2 in the inter-rows (black).
Figure 4. Protocol for the monitoring of the phenological stages of cherry trees. (a) Phenology of cherry trees according to BBCH; (b) at plot scale, in an orchard, three trees in red monitored by observations (BBCH scale); (c) at tree scale, two locations selected to classify the flowering stage in the tree; and (d) flowering stage of a cherry tree in April 2022.
Figure 5. Comparison of temporal profiles of the interpolated Sentinel 2 LAI (black line) and the PAI obtained from the ceptometer (blue line, P2 protocol) and Viticanopy (green line) for three orchards: (a) 3099 (cherry, grassed, Ouvèze), (b) 183 (cherry, non-grassed, Ouvèze), and (c) 4 (nectarine, La Crau) at the beginning of 2023.
Figure 6. Comparison between Sentinel 2 LAI and PAI from (a) ceptometer measurements taken at all orchards of the two areas (La Crau and Ouvèze), (b) Viticanopy measurements at all orchards, and (c) Viticanopy measurements excluding 2 non-grassed orchards (183, 259). The black line represents the optimal 1:1 correlation; the red line represents the results from linear regression.
Figure 7. (a) (top graphs) Proportion of tree (orange, 100*FCOVER_t/FCOVER_c, see Equation (1)) and inter-row (green, 100*((1 − FCOVER_t)*FCOVER_g)/FCOVER_c) components computed from hemispherical photographs used to estimate FCOVER for two dates, 22 March 2022 (doy 81) and 21 June 2022 (doy 172), for all the monitored fields. (b) (bottom graphs) For two plots (left, field 183.2 and right, field 3099.1), temporal variations in the proportion of tree and inter-row components for the different observation dates in 2022.
Figure 8. (a) Averaged percentage of the grass contribution to FAPAR computed from hemispherical photographs according to Equation (1) for all grassed orchard plots in 2022. Examples of Sentinel 2 FAPAR dynamics (black lines) for plots at (b) non-grassed site 183 and (c) grassed site 1418. Initial values of FAPAR, as computed from BVNET, are provided in black. The green line represents adjusted FAPAR after subtracting the grass contribution (percentage obtained from hemispherical photographs); it corresponds to the FAPAR of the trees only. The percentage of grass contribution is in red.
Figure 9. Correlation between (a) FCOVER obtained from hemispherical photographs (from Equation (1)) for all orchards of the two studied areas and FCOVER from Sentinel 2 computed with BVNET, (b) FAPAR from hemispherical photographs and FAPAR from Sentinel 2 for all orchards and for the 3 years, (c) FCOVER from Viticanopy and Sentinel 2 for all orchards of the two areas, except 183 and 259, and (d) FCOVER from upward-aimed hemispherical photographs and from Viticanopy for all plots.
Figure 10. (a) LAI temporal profiles obtained from BVNET applied to Sentinel 2 data averaged at plot and field scales (field 3099) for the year 2022 and (b) soil water stock (in mm, in blue) computed at 0–50 cm using capacitive sensors (described in Section 2.1), with rainfall recorded at the Carpentras station (see Supplementary Part S1 and Table S1).
Figure 11. Time series of FCOVER (mean value at field scale) for the cherry trees in field 3099 in the Ouvèze area from 2016 to 2023.
Figure 12. Sentinel 2 FAPAR evolution in 2022 for two cherry tree fields, with the date of the flowering observation (in green) and the date of the fruit set observation (in red) for (a) plot 183 (non-grassed cherry trees) and (b) plot 3099 (grassed cherry trees).
Figure 13. Variability in dates for the phenological stages of a cherry tree orchard (plot 3099) observed in 2022.
Figure 14. (a) Normalised FAPAR computed for all observed cherry trees relative to observation dates for BBCH stages in the Ouvèze area in 2021 for five plots. (b) Map of dates distinguishing between flowering and fruit set stages for 2021, obtained by thresholding FAPAR images.
23 pages, 10725 KiB  
Article
Leveraging Geospatial Information to Map Perceived Tenure Insecurity in Urban Deprivation Areas
by Esaie Dufitimana, Jiong Wang and Divyani Kohli-Poll Jonker
Land 2024, 13(9), 1429; https://doi.org/10.3390/land13091429 - 4 Sep 2024
Viewed by 776
Abstract
Increasing tenure security is essential for promoting safe and inclusive urban development and achieving Sustainable Development Goals. However, assessment of tenure security relies on conventional census and survey statistics, which often fail to capture the dimension of perceived tenure insecurity. This perceived tenure insecurity is crucial as it influences local engagement and the effectiveness of policies. In many regions, particularly in the Global South, these conventional methods lack the necessary data to adequately measure perceived tenure insecurity. This study first used household survey data to derive variations in perceived tenure insecurity and then explored the potential of Very-High Resolution (VHR) satellite imagery and spatial data to assess these variations in urban deprived areas. Focusing on the city of Kigali, Rwanda, the study collected household survey data, which were analysed using Multiple Correspondence Analysis to capture variations of perceived tenure insecurity. In addition, VHR satellite imagery and spatial datasets were analysed to characterize urban deprivation. Finally, a Random Forest regression model was used to assess the relationship between variations of perceived tenure insecurity and the spatial characteristics of urban deprived areas. The findings highlight the potential of geospatial information to estimate variations in perceived tenure insecurity within urban deprived contexts. These insights can inform evidence-based decision-making by municipalities and stakeholders in urban development initiatives. Full article
(This article belongs to the Special Issue Digital Earth and Remote Sensing for Land Management)
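The final modeling step regresses an MCA-derived perceived-insecurity score on image-based spatial characteristics with a random forest. A minimal scikit-learn sketch with synthetic stand-in features (land-cover fractions and GLCM texture statistics are examples of what such features might be, not the paper's exact predictor set):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# X: image-derived spatial characteristics per surveyed household buffer;
# y: MCA first-dimension score used as the perceived-tenure-insecurity indicator.
# Synthetic stand-ins are used here.
rng = np.random.default_rng(0)
X = rng.random((400, 15))
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.1, 400)

rf = RandomForestRegressor(n_estimators=300, random_state=0)
r2 = cross_val_score(rf, X, y, cv=5, scoring='r2')
print('Mean cross-validated R²:', r2.mean())
```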
Figure 1. Map of Kigali city and the selected sites.
Figure 2. Steps and process followed by the study.
Figure 3. Characteristics of the physical environment of neighborhoods across the study sites.
Figure 4. Responses according to housing materials, building shapes, sizes, and access to basic amenities.
Figure 5. Tenure rights based on land and/or property documentation, acquisition methods, and duration of occupation.
Figure 6. Perceptions of respondents on tenure (in)security.
Figure 7. Scatter plot of respondents in two-dimensional space on the first and second dimensions of the MCA.
Figure 8. Squared correlation of indicators with the first dimension of the MCA.
Figure 9. The variation of perceived tenure insecurity across the study sites: (a) the site of Gatsata (3); (b) the sites of Kimisagara (2) and Gitega (1).
Figure 10. Example of land cover classification results from the model (left) and GLCM texture features (right).
Figure 11. Variable importance based on image-based spatial characteristics extracted at the 20 m buffer.
Figure 12. Variable importance based on image-based spatial characteristics and additional spatial data at the 25 m buffer.
17 pages, 12277 KiB  
Article
Is Your Training Data Really Ground Truth? A Quality Assessment of Manual Annotation for Individual Tree Crown Delineation
by Janik Steier, Mona Goebel and Dorota Iwaszczuk
Remote Sens. 2024, 16(15), 2786; https://doi.org/10.3390/rs16152786 - 30 Jul 2024
Cited by 2 | Viewed by 951
Abstract
For the accurate and automatic mapping of forest stands based on very-high-resolution satellite imagery and digital orthophotos, precise object detection at the individual tree level is necessary. Currently, supervised deep learning models are primarily applied for this task. To train a reliable model, it is crucial to have an accurate tree crown annotation dataset. The current method of generating these training datasets still relies on manual annotation and labeling. Because of the intricate contours of tree crowns, vegetation density in natural forests and the insufficient ground sampling distance of the imagery, manually generated annotations are error-prone. It is unlikely that the manually delineated tree crowns represent the true conditions on the ground. If these error-prone annotations are used as training data for deep learning models, this may lead to inaccurate mapping results for the models. This study critically validates manual tree crown annotations on two study sites: a forest-like plantation on a cemetery and a natural city forest. The validation is based on tree reference data in the form of an official tree register and tree segments extracted from UAV laser scanning (ULS) data for the quality assessment of a training dataset. The validation results reveal that the manual annotations detect only 37% of the tree crowns in the forest-like plantation area and 10% of the tree crowns in the natural forest correctly. Furthermore, it is frequent for multiple trees to be interpreted in the annotation as a single tree at both study sites. Full article
(This article belongs to the Section AI Remote Sensing)
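One common way to compare manual crown annotations against reference segments, as this study does conceptually, is polygon-overlap (IoU) matching. The sketch below is a generic illustration with made-up square polygons; the paper's actual matching and counting rules may differ.

```python
from shapely.geometry import Polygon  # crown outlines would normally be read from shapefiles

def iou(a: Polygon, b: Polygon) -> float:
    """Intersection over union of two crown polygons."""
    inter = a.intersection(b).area
    return inter / (a.area + b.area - inter) if inter else 0.0

# A manual annotation counts as a correct detection if it overlaps a reference crown
# above some IoU threshold (the threshold and one-to-one matching rule are assumptions).
annotation = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
reference = Polygon([(1, 1), (5, 1), (5, 5), (1, 5)])
print('IoU:', round(iou(annotation, reference), 2), '-> detected' if iou(annotation, reference) >= 0.5 else '-> missed')
```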
Figure 1. Mapping informal settlements in Nairobi, Kenya, with manual annotations. Each colored line indicates a different annotator’s delineation of the same area [16]: (a) boundary deviation due to generalization of informal settlements and (b) deviation resulting from inclusion or exclusion of the fringe [26] (adapted from Elemes et al. [16] with permission from Kohli et al. [26]).
Figure 2. The four validation areas (red outlines) of study site 1.
Figure 3. Nadir 3D point cloud in an RGB color scheme (a) and derived 2D segments (b), which represent the single-tree reference data for the validation process of study site 2.
Figure 4. Example annotation images with 512 × 512 pixel resolution based on the digital orthophoto (a) and the satellite image from WorldView-3 (b).
19 pages, 5497 KiB  
Review
Earth Observation—An Essential Tool towards Effective Aquatic Ecosystems’ Management under a Climate in Change
by Filipe Lisboa, Vanda Brotas and Filipe Duarte Santos
Remote Sens. 2024, 16(14), 2597; https://doi.org/10.3390/rs16142597 - 16 Jul 2024
Viewed by 1122
Abstract
Numerous policies have been proposed by international and supranational institutions, such as the European Union, to surveil Earth from space and furnish indicators of environmental conditions across diverse scenarios. In tandem with these policies, different initiatives, particularly on both sides of the Atlantic, have emerged to provide valuable data for environmental management, such as the concept of essential climate variables. However, a key question arises: do the available data align with the monitoring requirements outlined in these policies? In this paper, we concentrate on Earth Observation (EO) optical data applications for environmental monitoring, with a specific emphasis on ocean colour. In a rapidly changing climate, it becomes imperative to consider data requirements for upcoming space missions. We place particular significance on the application of these data to the monitoring of lakes and marine protected areas (MPAs). These two use cases, albeit very different in nature, underscore the necessity for higher-spatial-resolution imagery to effectively study these vital habitats. Limnological ecosystems, sensitive to ice melting and temperature fluctuations, serve as crucial indicators of a climate in change. Simultaneously, MPAs, although generally small in size, play a crucial role in safeguarding marine biodiversity and supporting sustainable marine resource management. They are increasingly acknowledged as a critical component of global efforts to conserve and manage marine ecosystems, as exemplified by Target 3 of the Kunming–Montreal Global Biodiversity Framework (GBF), which aims to effectively conserve 30% of terrestrial, inland water, coastal, and marine areas by 2030 through protected areas and other conservation measures. In this paper, we analysed different policies concerning EO data and their application to environment-related monitoring. We also reviewed the existing relevant literature to identify the gaps that need to be bridged to monitor these habitats effectively in an ecosystem-based approach: making data more accessible and generating water quality indicators, especially Chlorophyll-a concentrations, from new high- and very-high-resolution satellite monitoring. Such data are pivotal for understanding, at small and local scales, how these habitats are responding to climate change and various stressors. Full article
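Where the review points to Chlorophyll-a as a key water quality indicator to derive from high-resolution imagery, a band-ratio index is the simplest starting point. The sketch below is a minimal, hedged example using the Normalized Difference Chlorophyll Index (NDCI) on Sentinel-2-style red (B4) and red-edge (B5) reflectances; it is offered as an illustration of the idea, not as the indicator proposed by the authors:

```python
# Minimal sketch: a red-edge band-ratio proxy for Chlorophyll-a in optically complex waters.
# Band names follow Sentinel-2 conventions (B4 ~665 nm, B5 ~705 nm). Converting NDCI to an
# actual concentration requires a regionally calibrated regression, which is not shown here.
import numpy as np

def ndci(red_b4: np.ndarray, red_edge_b5: np.ndarray) -> np.ndarray:
    """Normalized Difference Chlorophyll Index: (B5 - B4) / (B5 + B4)."""
    denominator = red_edge_b5 + red_b4
    out = np.full_like(denominator, np.nan, dtype=float)
    # Divide only where the denominator is positive; elsewhere leave NaN.
    np.divide(red_edge_b5 - red_b4, denominator, out=out, where=denominator > 0)
    return out
```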
(This article belongs to the Section Biogeosciences Remote Sensing)
Show Figures
Figure 1. Overview of the three Copernicus components: space, in situ and services. Contributing missions are not part of the space component; they can be included directly in the provision of Copernicus services.
Figure 2. Overview of the resolutions available for the Copernicus Space segment and contributing missions. VHR resolutions, needed for the monitoring of very small lakes and MPAs, are only available through contributing missions. All satellites in blue, green and orange are privately owned.
Figure 3. Of the 56 ECVs, 36 (highlighted in green) rely on EO data. Some of the concerned organisations are depicted on the top right of each ECV and are given as examples, since many more are involved in providing products for each ECV. For example, nine organisations are involved in providing data for the Precipitation ECV. Only NASA manages the Lightning ECV and only ESA is responsible for the Ocean Colour ECV. The star represents a future engagement of ESA in the Permafrost ECV.
Figure 4. Results from the Web of Science query.
Figure 5. VOSviewer map of co-occurrences in the 8054 abstracts analysed, showing how lake research and satellite remote sensing are close subjects.
Figure 6. VOSviewer map of co-occurrences in the 231 abstracts analysed, showing how marine protected areas and satellite remote sensing are not co-occurring in the scientific literature. The map can be accessed through the QR code, with the possibility to explore the clusters.
24 pages, 96595 KiB  
Article
Modified ESRGAN with Uformer for Video Satellite Imagery Super-Resolution
by Kinga Karwowska and Damian Wierzbicki
Remote Sens. 2024, 16(11), 1926; https://doi.org/10.3390/rs16111926 - 27 May 2024
Viewed by 1112
Abstract
In recent years, a growing number of sensors that provide imagery with constantly increasing spatial resolution are being placed in orbit. Contemporary Very-High-Resolution Satellites (VHRS) are capable of recording images with a spatial resolution of less than 0.30 m. However, until now, these scenes were acquired in a static way. The new technique of the dynamic acquisition of video satellite imagery has been available only for a few years and has multiple applications related to remote sensing. However, despite the possibility it offers to detect dynamic targets, its main limitation is the degradation of the spatial resolution of the image that results from imaging in video mode, along with a significant influence of lossy compression. This article presents a methodology that employs Generative Adversarial Networks (GANs). For this purpose, a modified ESRGAN architecture is used for the spatial resolution enhancement of video satellite images. In this solution, the GAN network generator was extended by the Uformer model, which is responsible for a significant improvement in the quality of the estimated SR images. This significantly enhances the possibility of recognizing and detecting objects. The discussed solution was tested on the Jilin-1 dataset and presents the best results for both the global and local assessment of the image (the mean values of the SSIM and PSNR parameters for the test data were, respectively, 0.98 and 38.32 dB). Additionally, the proposed solution, despite employing artificial neural networks, does not require a high computational capacity, which means it can be implemented on workstations that are not equipped with graphics processors. Full article
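A minimal sketch of the quality metrics quoted above, assuming scikit-image is available; the 20 × 20 pixel window for the local assessment mirrors the article's figures, but the array shapes and data-range handling are illustrative assumptions rather than the authors' implementation:

```python
# Minimal sketch (not the authors' code): SSIM and PSNR between a super-resolved (SR)
# frame and its high-resolution (HR) reference, globally and on 20 x 20 pixel fields.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def global_quality(hr: np.ndarray, sr: np.ndarray):
    """Return (SSIM, PSNR) over the whole single-band image."""
    data_range = float(hr.max() - hr.min())
    return (structural_similarity(hr, sr, data_range=data_range),
            peak_signal_noise_ratio(hr, sr, data_range=data_range))

def local_psnr(hr: np.ndarray, sr: np.ndarray, window: int = 20) -> np.ndarray:
    """PSNR map over non-overlapping window x window fields, as in the local assessment."""
    data_range = float(hr.max() - hr.min())
    rows, cols = hr.shape[0] // window, hr.shape[1] // window
    psnr_map = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block_hr = hr[i * window:(i + 1) * window, j * window:(j + 1) * window]
            block_sr = sr[i * window:(i + 1) * window, j * window:(j + 1) * window]
            psnr_map[i, j] = peak_signal_noise_ratio(block_hr, block_sr, data_range=data_range)
    return psnr_map
```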
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures
Figure 1. Diagram of enhancement of spatial resolution of a single video frame.
Figure 2. Discriminator model [58].
Figure 3. The flowchart of the algorithm.
Figure 4. Examples of images from test data, with quality results shown in Table 3: (a) HR image, (b) MCWESRGAN with Uformer, (c) MCWESRGAN with Lucy–Richardson Algorithm, and (d) MCWESRGAN with Wiener deconvolution.
Figure 5. Structural similarity between the estimated images (tiles) (SR) and the reference HR images.
Figure 6. Peak signal-to-noise ratio (PSNR [dB]) between the estimated images (tiles) (SR) and the reference HR images.
Figure 7. Local assessment: SSIM metrics (for the evaluated field of the size of 20 × 20 pixels).
Figure 8. Local assessment: PSNR metrics (for the evaluated field of the size of 20 × 20 pixels).
Figure 9. PSD diagram on the x and y directions for a sample image.
Figure 10. Images in the frequency domain.
23 pages, 6492 KiB  
Article
Extraction of Water Bodies from High-Resolution Aerial and Satellite Images Using Visual Foundation Models
by Samed Ozdemir, Zeynep Akbulut, Fevzi Karsli and Taskin Kavzoglu
Sustainability 2024, 16(7), 2995; https://doi.org/10.3390/su16072995 - 3 Apr 2024
Cited by 2 | Viewed by 2852
Abstract
Water, indispensable for life and central to ecosystems, human activities, and climate dynamics, requires rapid and accurate monitoring. This is vital for sustaining ecosystems, enhancing human welfare, and effectively managing land, water, and biodiversity at both the local and global levels. In the rapidly evolving domain of remote sensing and deep learning, this study focuses on water body extraction and classification using recent visual foundation models (VFMs). Specifically, the Segment Anything Model (SAM) and Contrastive Language-Image Pre-training (CLIP) models have shown promise in semantic segmentation, dataset creation, change detection, and instance segmentation tasks. A novel two-step approach involving segmenting images via the Automatic Mask Generator method of the SAM and the zero-shot classification of segments using CLIP is proposed, and its effectiveness is tested on water body extraction problems. The proposed methodology was applied to both remote sensing imagery acquired from LANDSAT 8 OLI and very high-resolution aerial imagery. Results revealed that the proposed methodology accurately delineated water bodies across complex environmental conditions, achieving a mean intersection over union (IoU) of 94.41% and an F1 score of 96.97% for satellite imagery. Similarly, for the aerial imagery dataset, the proposed methodology achieved a mean IoU of 90.83% and an F1 score exceeding 94.56%. The high accuracy achieved in selecting segments predominantly classified as water highlights the effectiveness of the proposed model in intricate environmental image analysis. Full article
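The two-step idea (SAM segments, CLIP classifies) can be sketched roughly as follows. The checkpoint path, prompt wording and 0.5 probability cut-off are assumptions, and a generic ViT-B/32 CLIP stands in for the RSICD-tuned model used in the paper:

```python
# Hedged sketch of a SAM + CLIP water-extraction pipeline, not the authors' code.
import numpy as np
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"
# Checkpoint file name is a placeholder assumption.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)
clip_model, preprocess = clip.load("ViT-B/32", device=device)
prompts = clip.tokenize(["an aerial photo of water", "an aerial photo of land"]).to(device)

def extract_water_masks(image_rgb: np.ndarray):
    """Return binary masks of segments that CLIP classifies as water."""
    water_masks = []
    for mask in mask_generator.generate(image_rgb):            # step 1: SAM segments
        x, y, w, h = (int(v) for v in mask["bbox"])
        crop = Image.fromarray(image_rgb[y:y + h, x:x + w])
        with torch.no_grad():                                   # step 2: CLIP zero-shot
            logits, _ = clip_model(preprocess(crop).unsqueeze(0).to(device), prompts)
            if logits.softmax(dim=-1)[0, 0] > 0.5:              # "water" prompt wins
                water_masks.append(mask["segmentation"])
    return water_masks
```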
Show Figures
Figure 1. Eight pseudo-color sample tiles from the YTU-Waternet test dataset, generated using blue, red and NIR bands from Landsat 8 OLI imagery.
Figure 2. The study area in the Rize province of Turkey. Tile boundaries are represented in red and the respective tile numbers are given in the upper left of each tile.
Figure 3. The study area located in the Malatya province of Turkey. Tile boundaries are represented in red and the respective tile numbers are given in the upper left of each tile.
Figure 4. Flowchart of water body extraction with VFMs.
Figure 5. The framework of the Segment Anything Model (SAM) (adapted from Kirillov et al., 2023) [45].
Figure 6. An illustration of the CLIP model, adapted from Radford et al. (2021) [46].
Figure 7. Comparative analysis of the CLIP RSICD model for different prompts on various image segments.
Figure 8. Results for five sample test areas from the YTU-Waternet dataset. Segments extracted with SAM are delineated with blue boundaries and filled with distinct, randomly assigned colors to ensure clear differentiation.
Figure 9. In-depth analysis of segmentation results of the SAM + CLIP RSICD framework for the YTU-Waternet dataset. Segments extracted by SAM are outlined in blue and filled with a pale blue color.
Figure 10. Segmentation results of the proposed framework on the Malatya study area. Segments identified using SAM have blue boundaries and are filled with random colors for clear differentiation.
Figure 11. Segmentation results of the proposed framework on the Malatya study area. Note that the green lines represent the extracted water body boundaries, while the tile boundaries are outlined in red, and the respective tile numbers are displayed in the upper left corner of each tile.
Figure 12. Comparison of the SAM segmentation results in Tile 3 before and after the adjustment of the stability score offset parameter to 0.1. Segments extracted by SAM are outlined with blue boundaries and filled with random colors for clear differentiation.
Figure 13. Results for the Rize study area. Note that the green lines represent the extracted boundaries of the water bodies, while the tile boundaries are outlined in red, and the respective tile numbers are displayed in the upper left corner of each tile.
Figure 14. Segmentation results for Tiles 10 and 11 using the proposed framework on the Rize dataset. Segments extracted by SAM are outlined with blue boundaries and filled with random colors for clear differentiation.
29 pages, 11112 KiB  
Article
Analysing the Relationship between Spatial Resolution, Sharpness and Signal-to-Noise Ratio of Very High Resolution Satellite Imagery Using an Automatic Edge Method
by Valerio Pampanoni, Fabio Fascetti, Luca Cenci, Giovanni Laneve, Carla Santella and Valentina Boccia
Remote Sens. 2024, 16(6), 1041; https://doi.org/10.3390/rs16061041 - 15 Mar 2024
Cited by 4 | Viewed by 2083
Abstract
Assessing the performance of optical imaging systems is crucial to evaluate their capability to satisfy the product requirements for an Earth Observation (EO) mission. In particular, the evaluation of image quality is undoubtedly one of the most important, critical and problematic aspects of remote sensing. It involves not only pre-flight analyses, but also continuous monitoring throughout the operational lifetime of the observing system. The Ground Sampling Distance (GSD) of the imaging system is often the only parameter used to quantify its spatial resolution, i.e., its capability to resolve objects on the ground. In practice, this capability is also heavily influenced by other image quality parameters, such as the image sharpness and the Signal-to-Noise Ratio (SNR). However, these last two aspects are often analysed separately, using unrelated methodologies, which complicates the image quality assessment and poses standardisation issues. To this end, we expanded the features of our Automatic Edge Method (AEM), originally developed to simplify and automate the estimation of sharpness metrics, to also extract the image SNR. In this paper we applied the AEM to a wide range of optical satellite images characterised by different GSD and Pixel Size (PS), with the objective of exploring the nature of the relationship between the components of overall image quality (image sharpness, SNR) and product geometric resampling (expressed in terms of the GSD/PS ratio). Our main objective is to quantify how the sharpness and the radiometric quality of an image product are affected by different product geometric resampling strategies, i.e., by distributing imagery with a PS larger or smaller than the GSD of the imaging system. The AEM allowed us to explore this relationship by relying on a vast number of data points, which give the results, expressed in terms of sharpness metrics and SNR means, a robust statistical significance. The results indicate the existence of a direct relationship between the product geometric resampling and the overall image quality, and also highlight a good degree of correlation between the image sharpness and SNR. Full article
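For readers wanting a concrete starting point, the sketch below fits an idealised error-function Edge Spread Function to a 1-D edge profile and derives the FWHM of the Line Spread Function together with a simple edge SNR. The sigmoid model and the way the flat sides of the edge are selected are simplifying assumptions, not the AEM itself:

```python
# Minimal sketch, under simplifying assumptions: sharpness (LSF FWHM) and edge SNR
# from a 1-D edge profile extracted across a sharp boundary in an image.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def esf_model(x, a, b, x0, sigma):
    """Idealised Edge Spread Function: a scaled error function plus an offset."""
    return a * erf((x - x0) / (np.sqrt(2) * sigma)) + b

def edge_quality(profile: np.ndarray):
    x = np.arange(profile.size, dtype=float)
    p0 = [(profile.max() - profile.min()) / 2, profile.mean(), profile.size / 2, 1.0]
    (a, b, x0, sigma), _ = curve_fit(esf_model, x, profile, p0=p0)

    # The LSF is the derivative of the ESF: a Gaussian of the same sigma,
    # whose FWHM follows directly from sigma.
    fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(sigma)

    # Crude edge SNR: mean contrast across the edge over the noise on the flat sides.
    dark = profile[x < x0 - 2 * abs(sigma)]
    bright = profile[x > x0 + 2 * abs(sigma)]
    snr = (bright.mean() - dark.mean()) / np.hypot(dark.std(), bright.std())
    return fwhm, snr
```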
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)
Show Figures
Figure 1. Fitted Edge Spread Function and edge Signal-to-Noise Ratio. The estimation of the edge SNR is performed using the definition provided by [19,24]. The black horizontal lines show the width of the portions of the sides of the Edge Spread Function used for the purpose, which are calculated automatically using the positions corresponding to the 10% values of the Line Spread Function in its ascending and descending halves [20].
Figure 2. Spatial distribution of the considered VHR_IMAGE_2021 products.
Figure 3. FWHM (triangles) and SNR (squares) values averaged for each band of the products reported in Table 1. The standard deviation of each FWHM value is represented by the error bar centred on each triangle. The horizontal dashed red line represents the boundary between balanced and blurry images according to [13]. BLUE, GREEN, RED and NIR bands are shown from top to bottom.
Figure 4. FWHM (blue triangles) and SNR (black squares) values averaged for each product reported in Table 1. The standard deviation of each FWHM value is represented by the error bar over each triangle. The horizontal dashed red line represents the boundary between balanced and blurry images according to [13].
Figure 5. Relationship between the FWHM (x-axis) and SNR (y-axis) mean values evaluated for each of the considered VHR_IMAGE_2021 products. Each product is shown with a different colour, as indicated in the colour bar at the right side of the figure.
Figure 6. Relationship between the GSD/PS ratio (x-axis) and the FWHM mean values (y-axis) evaluated for each of the considered VHR_IMAGE_2021 products.
Figure 7. Relationship between the GSD/PS ratio (x-axis) and the SNR mean values (y-axis) evaluated for each of the considered VHR_IMAGE_2021 products.
Figure 8. Detail of an agricultural area imaged by the RED (top) and NIR (bottom) bands of a SP06_04 product. Scale 1:6000. The NIR band appears noticeably blurrier than the RED band.
Figure 9. Zoom over a common area imaged by the SV11_T01 (top) and GY01_T01 (bottom) products. Scale 1:3000.
Figure 10. Zoom over a common area imaged by the SV12_T02 (top) and KS03_T02 (bottom) products. Scale 1:4000.
Figure 11. Zoom over a common area imaged by the SV14_T01 (top) and EW02_T04 (bottom) products. Scale 1:4000.
Figure 12. Relationship between the GSD/PS ratio (x-axis) and FWHM mean values (y-axis) for the considered VHR_IMAGE_2021 (blue triangles) and Landsat/Sentinel-2 datasets (amber triangles).
Figure 13. Relationship between the GSD/PS ratio (x-axis) and SNR mean values (y-axis) for the considered VHR_IMAGE_2021 (blue triangles) and Landsat/Sentinel-2 datasets (amber triangles).
19 pages, 12144 KiB  
Article
Effects of the Construction of Granadilla Industrial Port in Seagrass and Seaweed Habitats Using Very-High-Resolution Multispectral Satellite Imagery
by Antonio Mederos-Barrera, José Sevilla, Javier Marcello, José María Espinosa and Francisco Eugenio
Remote Sens. 2024, 16(6), 945; https://doi.org/10.3390/rs16060945 - 8 Mar 2024
Viewed by 1353
Abstract
Seagrass and seaweed meadows play a very important role in coastal and marine ecosystems. However, anthropogenic impacts pose risks to these delicate habitats. This paper analyses the multitemporal impact of the construction of the largest industrial port in the Canary Islands, near the Special Area of Conservation Natura 2000, on the Cymodocea nodosa seagrass meadows (sebadales) of the South of Tenerife, in the locality of Granadilla (Canary Islands, Spain). Very-high-resolution WorldView-2 multispectral satellite data were used for the analysis. Specifically, three images were selected before, during, and after the construction of the port (2011, 2014, and 2022, respectively). Initially, advanced pre-processing of the images was performed, and then seabed maps were obtained using the machine learning K-Nearest Neighbors (KNN) supervised classification model, discriminating 12 different bottom types in Case-2 complex waters. The maps achieved high-quality metrics, with precision values of 85%, 81%, and 80%, recall of 76%, 77%, and 77%, and F1 scores of 80%, 79%, and 77% for 2011, 2014, and 2022, respectively. The results mainly show that the construction directly affected the seagrass and seaweed habitats. In particular, the impact of the port on the meadows of Cymodocea nodosa, Caulerpa prolifera, and maërl was assessed. The total maërl population was reduced by 1.9 km² over the study area. However, the Cymodocea nodosa population was maintained at the cost of colonizing maërl areas. Furthermore, sediment from the port covered a total of 0.98 km² of seabed, especially areas of Cymodocea nodosa and maërl. In addition, it was observed that Caulerpa prolifera established itself as a meadow at the entrance of the port, replacing part of the Cymodocea nodosa and maërl areas. As additional results, bathymetric maps were generated from the satellite imagery with the Sigmoid model, and the presence of a submarine outfall was also documented. Full article
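As an illustration of the classification step, the sketch below trains a K-Nearest Neighbors model on per-pixel samples and reports precision, recall and F1, the same metrics quoted above; the feature layout, train/test split and k = 5 are assumptions, not the authors' configuration:

```python
# Illustrative sketch (not the authors' pipeline): KNN classification of seabed types
# from per-pixel multispectral samples, with weighted precision/recall/F1 as quality metrics.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_recall_fscore_support

def classify_seabed(pixels: np.ndarray, labels: np.ndarray, k: int = 5):
    """pixels: (n_samples, n_bands) reflectances; labels: one of the 12 bottom-type classes."""
    X_train, X_test, y_train, y_test = train_test_split(
        pixels, labels, test_size=0.3, stratify=labels, random_state=0)
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_test, model.predict(X_test), average="weighted")
    return model, precision, recall, f1
```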
Show Figures
Graphical abstract
Figure 1. Granadilla Port: (a) Granadilla coast in Tenerife Island, Canary Islands (Google Earth ©), and (b) study area.
Figure 2. Satellite imagery of the study area: (a) 18 September 2011, (b) 22 September 2014, and (c) 23 October 2022.
Figure 3. Examples of (a) dense Cymodocea nodosa, (b) pure maërl, and (c) Caulerpa prolifera in the study area.
Figure 4. Distribution of sample pixels in relation to depth for each seabed type and image: (a) 18 September 2011, (b) 22 September 2014, and (c) 23 October 2022.
Figure 5. Methodology for the generation of seabed type mapping.
Figure 6. Results of the sunglint correction on the image of 18 September 2011: (a) image without the application of the sunglint correction and (b) image with the Hedley correction.
Figure 7. Results of the banding correction on the image of 3 October 2022: (a) image without the banding correction and (b) image with the banding correction. In both images, the brightness has been increased by 20% and the contrast by 40% to highlight the impact of the pre-processing.
Figure 8. Pre-processed images for (a) 18 September 2011, (b) 22 September 2014, and (c) 3 October 2022.
Figure 9. Results of the seabed type maps obtained with the KNN model for (a) 18 September 2011, (b) 22 September 2014, and (c) 3 October 2022.
Figure 10. Evolution of the seabed types: (a) maps of 2011 and 2022 for the port area and (b) graph of surface changes for the entire area.
Figure 11. Bathymetric maps obtained with the Sigmoid model for (a) 18 September 2011, (b) 22 September 2014, and (c) 3 October 2022.
Figure 12. Unauthorized submarine outfall emitting urban human sewage in the 3 October 2022 image: (a) bathymetric map showing the abnormal shallow depth due to the outfall (the area is highlighted in a red rectangle), and (b) outfall emission in the original image.
17 pages, 14859 KiB  
Article
Remotely Sensed and Field Data for Geomorphological Analysis of Water Springs: A Case Study of Ain Maarrouf
by Anselme Muzirafuti
Geosciences 2024, 14(2), 51; https://doi.org/10.3390/geosciences14020051 - 10 Feb 2024
Viewed by 1872
Abstract
The Tabular Middle Atlas of Morocco holds the main water reservoir that serves many cities across Morocco. Dolomite and limestone are the dominant geologic formations in this region in which water resources are contained. Recent studies conducted to evaluate the quality of this water suggest that it is very vulnerable to pollutants resulting from both anthropogenic and natural phenomena. High- and very-high-resolution satellite imagery has been used in an attempt to gain a better understanding of this karstic system and to suggest a strategy for its protection in order to reduce the impact of these phenomena. Based on the surface reflectance of land cover benchmarks, the karstic system has been horizontally delineated, as well as regions with intense human activities. Using band combinations in the infrared, shortwave infrared, and visible parts of the electromagnetic spectrum, we identified bare lands which have been interpreted as carbonate rocks, clay minerals, uncultivated fields, basalt rocks, and built-up areas. Other classes, such as water and vegetation, have also been identified. Carbonate rocks have been identified as areas with a high rate of water infiltration through their fracture system. Using a Sobel operator filter, these fractures have been mapped, and the results have revealed new and existing faults in two major fracture directions, NE-SW and NW-SE, where NE-SW is the preferential pathway for surface water infiltration towards the groundwater reservoir, while the NW-SE direction drains groundwater from the Cause to the Saiss basin. Over time, the infiltration of surface water through fractures has contributed to a gradual erosion of the carbonate rocks, which in turn developed karst landforms. This karst system is vulnerable due to the flow of pollutants in areas with shallow sinkholes. Using GDEM imagery, we extracted karst depressions, and their analysis shows that they are distributed along the fracture system, with many of them located on curvilinear or linear axes along the NE-SW fracture direction. We also found dolines scattered in areas with a high intensity of fracturing. This distribution has been validated by both on-the-ground measurements and very high-resolution satellite images, and depressions of different forms and shapes, dominated by dolines, poljes, lapiez, and avens, have been identified. We also found many water springs with a highly important water output, such as the Ain Maarrouf water spring. The aim of this study is to enhance the understanding of the hydrogeological system of the TMA, to expand the existing fracture database of the Cause of Agourai, and to establish a new morpho-structural picture of the Ain Maarrouf water spring. Full article
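A rough sketch of the Sobel-based lineament idea is given below: edge magnitude and orientation are computed from a single band and binned into a direction histogram, which is one simple way to check the dominance of NE-SW versus NW-SE trends. The 95th-percentile magnitude threshold and the binning are illustrative assumptions:

```python
# Hedged sketch: Sobel edge magnitude and orientation from one image band,
# summarised as a 0-180 degree direction histogram of the strongest edges.
import numpy as np
from scipy import ndimage

def lineament_orientations(band: np.ndarray, bins: int = 18):
    gx = ndimage.sobel(band.astype(float), axis=1)   # gradient along columns (x)
    gy = ndimage.sobel(band.astype(float), axis=0)   # gradient along rows (y)
    magnitude = np.hypot(gx, gy)
    strong = magnitude > np.percentile(magnitude, 95)  # keep only strong edges

    # Edge direction is perpendicular to the gradient; fold into 0-180 degrees.
    orientation = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
    hist, bin_edges = np.histogram(orientation[strong], bins=bins, range=(0, 180))
    return hist, bin_edges
```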
Show Figures
Figure 1. Location of on-ground site 14 for analysis of the faults and fractures in the contact region between the basin and the Cause of Agourai.
Figure 2. Methodological workflow for data analysis, modified from Muzirafuti, 2018 [32].
Figure 3. Spectral signature of land cover objects based on their reflectance (dimensionless) at different electromagnetic spectrum wavelengths (nm). Extracted from a Copernicus Sentinel-2 image acquired on 30 March 2017 in the Tabular Middle Atlas, modified from Muzirafuti, 2018 [32].
Figure 4. Spatial (D) and surface (C) profiles extracted from a high percentage of principal component analysis results (B) in an area occupied by carbonate rocks (A). From a Copernicus Sentinel-2 image acquired on 30 March 2017 in the Tabular Middle Atlas, modified from Muzirafuti, 2018 [32]. High pixel values are observed on unaltered dolomite, while low values were identified in an altered part of the carbonate rock due to the presence of gradual erosional clay (Terra-rossa), as indicated by the red arrows.
Figure 5. Lineaments extracted from a Copernicus Sentinel-2 image acquired in the Cause of Agourai, with numbers indicating the positions of the on-ground sites of the fracture measurements.
Figure 6. On-ground measurements at the contact between the Cause of Agourai and the Saiss basin. Analyses were conducted on site 14, as shown in Figure 5; modified from Muzirafuti, 2018 [32].
Figure 7. New and existing main faults extracted from satellite images in the Cause of Agourai.
Figure 8. Location of the Ain Amejrar water spring, located in the contact region between the TMA and the Central Massif of Morocco.
Figure 9. Flexural-slip anticline-fault on-ground measurement around the Ain Amejrar water spring, located in the contact region between the TMA and the Central Massif of Morocco, as shown from site 7 in Figure 8; modified from Muzirafuti, 2019 [33].
Figure 10. Lineaments extracted from a Copernicus Sentinel-2 image acquired in the Tabular Middle Atlas, with numbers indicating the positions of the on-ground fracture measurements around the water springs of Ain Bittit, Ain Ribaa, Ain Atrous and Ain Aguemguem.
Figure 11. The direction of fractures (C) extracted from carbonate rocks (B) around the water spring of Ain Bittit (A). Measurements were conducted at site 1, as shown in Figure 10; modified from Muzirafuti, 2018 [32].
Figure 12. Direction of fractures extracted from carbonate rocks around Ain Ribaa. Measurements were conducted on site 2, as shown in Figure 10; modified from Muzirafuti, 2018 [32].
Figure 13. The direction of fractures extracted from carbonate rocks around Ain Agueguem. Measurements were conducted on site 3, as shown in Figure 10.
Figure 14. Hydrogeological conceptual model of the water springs located in the contact region between the Tabular Middle Atlas and the Saiss basin.
Figure 15. Water springs and lineaments extracted from the Copernicus Sentinel-2 image acquired in the Tabular Middle Atlas.
Figure 16. Karst landforms susceptibility model of the Tabular Middle Atlas. Modified from Muzirafuti, 2018 [32].