Search Results (216)

Search Parameters:
Keywords = large scene remote sensing images

23 pages, 5897 KiB  
Article
A Large-Scale Building Unsupervised Extraction Method Leveraging Airborne LiDAR Point Clouds and Remote Sensing Images Based on a Dual P-Snake Model
by Zeyu Tian, Yong Fang, Xiaohui Fang, Yan Ma and Han Li
Sensors 2024, 24(23), 7503; https://doi.org/10.3390/s24237503 - 25 Nov 2024
Viewed by 349
Abstract
Automatic large-scale building extraction from LiDAR point clouds and remote sensing images is a growing focus in the fields of sensor applications and remote sensing. However, this building extraction task remains highly challenging due to the complexity of building sizes, shapes, and surrounding environments. In addition, the discreteness, sparsity, and irregular distribution of point clouds, as well as lighting, shadows, and occlusions in the images, seriously affect the accuracy of building extraction. To address these issues, we propose a new unsupervised building extraction algorithm, PBEA (Point and Pixel Building Extraction Algorithm), based on a new dual P-snake model (Dual Point and Pixel Snake Model). The dual P-snake model is an enhanced active boundary model that uses point clouds and images simultaneously to obtain the inner and outer boundaries. It enables interaction and convergence between the inner and outer boundaries, improving building boundary detection, especially in complex scenes. Using the dual P-snake model and polygonization, PBEA can accurately extract large-scale buildings. We evaluated PBEA and the dual P-snake model on the ISPRS Vaihingen dataset and the Toronto dataset. The experimental results show that PBEA achieves an area-based quality evaluation metric of 90.0% on the Vaihingen dataset and 92.4% on the Toronto dataset. Compared with other methods, our method demonstrates satisfactory performance.
(This article belongs to the Special Issue Object Detection via Point Cloud Data)
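The area-based quality metric reported above is, in building-extraction evaluation, commonly defined as quality = TP / (TP + FP + FN) computed over pixel areas; whether PBEA uses exactly this pixel-level form is an assumption here. A minimal sketch (the function name and binary-mask inputs are illustrative, not taken from the paper):

```python
import numpy as np

def area_quality(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Area-based quality (Jaccard-style): TP / (TP + FP + FN) over pixels."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # building pixels found
    fp = np.logical_and(pred, ~gt).sum()   # false building pixels
    fn = np.logical_and(~pred, gt).sum()   # missed building pixels
    total = tp + fp + fn
    return tp / float(total) if total else 1.0
```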
Figures 1–11: PBEA flow chart; building boundary extraction from the point clouds (FEC clustering, boundary extraction, projection onto the image) and from the image (edge closure, point-cloud-constrained boundaries); dual P-snake boundary results, including sparse point-cloud, shadowed, and occluded regions; boundary polygonization; snake-model performance on buildings of varying complexity; and extraction results on the Vaihingen (Areas 1–3) and Toronto (Areas 4–5) datasets.
25 pages, 20123 KiB  
Article
EDWNet: A Novel Encoder–Decoder Architecture Network for Water Body Extraction from Optical Images
by Tianyi Zhang, Wenbo Ji, Weibin Li, Chenhao Qin, Tianhao Wang, Yi Ren, Yuan Fang, Zhixiong Han and Licheng Jiao
Remote Sens. 2024, 16(22), 4275; https://doi.org/10.3390/rs16224275 - 16 Nov 2024
Viewed by 889
Abstract
Automated water body (WB) extraction is one of the hot research topics in the field of remote sensing image processing. To address the challenges of over-extraction and incomplete extraction in complex water scenes, we propose an encoder–decoder semantic segmentation network for high-precision extraction of WBs called EDWNet. We integrate the Cross-layer Feature Fusion (CFF) module to resolve difficulties in segmenting WB edges, utilize the Global Attention Mechanism (GAM) module to reduce information diffusion, and combine these with the Deep Attention Module (DAM) to enhance the model's global perception ability and refine WB features. Additionally, an auxiliary head is incorporated to optimize the model's learning process. We also analyze the feature importance of bands 2 to 7 in Landsat 8 OLI images, constructing a band combination (RGB 763) suitable for the algorithm's WB extraction. When EDWNet is compared with various other semantic segmentation networks, the results on the test dataset show that it has the highest accuracy. EDWNet is applied to accurately extract WBs in the Weihe River basin from 2013 to 2021, and we quantitatively analyze the area changes of the WBs during this period and their causes. The results show that EDWNet is suitable for WB extraction in complex scenes and demonstrates great potential in long time-series and large-scale WB extraction.
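The band analysis above selects an "RGB 763" combination; assuming this denotes mapping Landsat 8 OLI bands 7, 6, and 3 to the red, green, and blue channels (the paper is the authority on the exact ordering), a minimal NumPy sketch of building such a composite might look like this:

```python
import numpy as np

def stack_rgb763(bands: dict[int, np.ndarray]) -> np.ndarray:
    """Build a 3-channel composite mapping Landsat 8 OLI bands 7, 6, 3 to R, G, B."""
    composite = np.stack([bands[7], bands[6], bands[3]], axis=-1).astype(np.float32)
    # Per-channel min-max normalization so the composite can feed a segmentation network.
    lo = composite.min(axis=(0, 1), keepdims=True)
    hi = composite.max(axis=(0, 1), keepdims=True)
    return (composite - lo) / np.maximum(hi - lo, 1e-6)
```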
Figures 1–15: the Weihe River Basin study area and a pansharpening example; the EDWNet, CFF, DAM, and GAM module structures; SHAP values of Landsat 8 OLI bands 2–7; validation loss for different band combinations; WB classification results of different methods; river-width extraction for the Weihe River "Xi'an-Xianyang" section; WB extraction maps for 2014–2020; long time-series WB accuracy and area changes (2013–2021); average high-temperature days; and the NINO 3.4 index.
29 pages, 4900 KiB  
Article
Forest Fire Severity and Koala Habitat Recovery Assessment Using Pre- and Post-Burn Multitemporal Sentinel-2 MSI Data
by Derek Campbell Johnson, Sanjeev Kumar Srivastava and Alison Shapcott
Forests 2024, 15(11), 1991; https://doi.org/10.3390/f15111991 - 11 Nov 2024
Viewed by 669
Abstract
Habitat loss due to wildfire is an increasing problem internationally for threatened animal species, particularly tree-dependent and arboreal animals. The koala (Phascolarctos cinereus) is endangered in most of its range, and large areas of forest were burnt by widespread wildfires in Australia in 2019/2020, mostly areas dominated by eucalypts, which provide koala habitats. We studied the impact of fire and three subsequent years of recovery on a property in South-East Queensland, Australia. A classified Differenced Normalised Burn Ratio (dNBR) calculated from pre- and post-burn Sentinel-2 scenes encompassing the local study area was used to assess the regional impact of fire on koala-habitat forest types. The geometrically structured composite burn index (GeoCBI), a field-based assessment, was used to classify fire severity impact. To detect lower levels of forest recovery, a manual classification of the multitemporal dNBR was used, enabling the direct comparison of images between recovery years. In our regional study area, the most suitable koala habitat occupied only about 2%, and about 10% of that was burnt by wildfire. Of the five koala habitat forest types studied, one upland type was burnt more severely and extensively than the others but recovered vigorously after the first year, reaching the same extent of recovery as the other forest types. The two alluvial forest types showed a negligible fire impact, likely due to their sheltered locations. In the second year, all the impacted forest types showed further, almost equal, recovery. In the third year of recovery, there was almost no detectable change and therefore no further notable vegetative growth. Our field data revealed that the dNBR can probably only measure the general vegetation present and not tree recovery via epicormic shooting and coppicing. Eucalypt foliage growth is a critical resource for the koala, so field verification seems necessary unless more accurate remote sensing methods such as hyperspectral imagery can be implemented.
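The dNBR used here is the standard differenced Normalised Burn Ratio: NBR = (NIR − SWIR) / (NIR + SWIR) is computed for the pre- and post-burn scenes, and dNBR = NBR_pre − NBR_post, with larger positive values indicating more severe burning. A minimal sketch (using Sentinel-2 bands 8 and 12 for NIR and SWIR is a common convention and an assumption here, not confirmed from the paper):

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalised Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / np.clip(nir + swir, 1e-6, None)

def dnbr(pre_nir, pre_swir, post_nir, post_swir) -> np.ndarray:
    """Differenced NBR; values near zero mean unchanged, higher values mean more severe burn."""
    # For Sentinel-2 MSI, NIR is often band 8 and SWIR band 12 (assumed band choice).
    return nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
```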
Graphical abstract and Figures 1–11: regional and local study area maps with the five koala-habitat forest types (GBS, IPR, IBM, BFA, BCF); typical burn severities (classes 0–5) three months after the fire; regional fire-severity percentages per forest type; koala refuge habitat maps at regional and local scales; NDVI at 88 plotless sites before and after the fire; mean dNBR per forest type per year; regional burn-severity class maps for 2019 and 2020; burn severity and recovery trends for GBS and IPR; and tree recovery via epicormic shooting versus dNBR.
19 pages, 10482 KiB  
Article
FFPNet: Fine-Grained Feature Perception Network for Semantic Change Detection on Bi-Temporal Remote Sensing Images
by Fengwei Zhang, Kai Xia, Jianxin Yin, Susu Deng and Hailin Feng
Remote Sens. 2024, 16(21), 4020; https://doi.org/10.3390/rs16214020 - 29 Oct 2024
Viewed by 612
Abstract
Semantic change detection (SCD) is a newly important topic in the field of remote sensing (RS) image interpretation, since it provides semantic comprehension of bi-temporal RS images by predicting change regions and change types, and it has great significance for urban planning and ecological monitoring. With the availability of large-scale bi-temporal RS datasets, various models based on deep learning (DL) have been widely applied in SCD. Since convolution operators in DL extract two-dimensional feature matrices in the spatial dimensions of images and stack these matrices along the channel dimension, image feature maps are three-dimensional. However, recent SCD models usually overlook this stereoscopic property of feature maps. First, they are usually limited in capturing spatial global features during bi-temporal global feature extraction and overlook global channel features. Second, they only consider spatial cross-temporal interaction during change feature perception and ignore channel interaction. To address these two challenges, a novel fine-grained feature perception network (FFPNet) is proposed in this paper, which employs the Omni Transformer (OiT) module to capture bi-temporal channel–spatial global features before utilizing the Omni Cross-Perception (OCP) module to achieve channel–spatial interaction between cross-temporal features. According to experiments on the SECOND dataset and the LandsatSCD dataset, our FFPNet reaches competitive performance on both countryside and urban scenes compared with recent typical SCD models.
Graphical abstract and Figures 1–9: illustration of BCD versus SCD; single-temporal channel and spatial inter-correlations versus cross-temporal channel and spatial interaction; overviews of FFPNet, the hierarchical feature encoder (HFE), and the Omni Transformer (OiT) module; the channel (8 × 128 × 128) and spatial (128 × 8 × 8) sliding windows used to mine global inter-correlations; the relative weight extraction, cross-weight assignment, and feature reduction stages of the Omni Cross-Perception (OCP) module; and visualized comparisons on the LandsatSCD and SECOND datasets.
15 pages, 6433 KiB  
Technical Note
RSPS-SAM: A Remote Sensing Image Panoptic Segmentation Method Based on SAM
by Zhuoran Liu, Zizhen Li, Ying Liang, Claudio Persello, Bo Sun, Guangjun He and Lei Ma
Remote Sens. 2024, 16(21), 4002; https://doi.org/10.3390/rs16214002 - 28 Oct 2024
Viewed by 983
Abstract
Satellite remote sensing images contain complex and diverse ground object information and exhibit spatial multi-scale characteristics, making the panoptic segmentation of satellite remote sensing images a highly challenging task. Due to the lack of large-scale annotated datasets for panoptic segmentation, existing methods still suffer from weak model generalization. To mitigate this issue, this paper leverages the advantages of the Segment Anything Model (SAM), which can segment any object in remote sensing images without requiring annotations, and proposes a high-resolution remote sensing image panoptic segmentation method called Remote Sensing Panoptic Segmentation SAM (RSPS-SAM). Firstly, to address the loss of global information caused by cropping large remote sensing images for training, a Batch Attention Pyramid was designed to extract multi-scale features from remote sensing images and capture long-range contextual information between cropped patches, thereby enhancing the semantic understanding of remote sensing images. Secondly, we constructed a Mask Decoder to address SAM's need for manual input prompts and its inability to output category information. This decoder utilizes mask-based attention for mask segmentation, enabling automatic prompt generation and category prediction of segmented objects. Finally, the effectiveness of the proposed method was validated on the high-resolution remote sensing airport scene dataset RSAPS-ASD. The results demonstrate that the proposed method achieves segmentation and recognition of foreground instances and background regions in high-resolution remote sensing images without prompt input, while providing smooth segmentation boundaries, with a panoptic segmentation quality (PQ) of 57.2, outperforming current mainstream methods.
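The panoptic quality (PQ) score quoted above is the standard metric: predicted and ground-truth segments matched with IoU > 0.5 count as true positives, and PQ = (sum of matched IoUs) / (|TP| + ½|FP| + ½|FN|). A minimal sketch given precomputed matches (the data layout is an assumption):

```python
def panoptic_quality(matched_ious: list[float], num_fp: int, num_fn: int) -> float:
    """PQ = sum of IoUs over matched segment pairs / (TP + 0.5*FP + 0.5*FN).

    matched_ious: IoU of each predicted segment matched to a ground-truth
    segment with IoU > 0.5 (each such pair counts as one true positive).
    """
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(matched_ious) / denom if denom else 0.0
```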
Graphical abstract and Figures 1–9: an example of remote sensing panoptic segmentation; the RSPS-SAM structure with the added Batch Attention Pyramid and Mask Decoder; the Batch Attention Pyramid, BAM, Mask Decoder, and Mask Attention Layer structures; examples from the RSAPS-ASD dataset; visual comparisons against Panoptic-FPN, Mask2Former, Panoptic Segformer, and Mask DINO; and an ablation visualization with and without the BAP.
22 pages, 7929 KiB  
Article
Remote Sensing LiDAR and Hyperspectral Classification with Multi-Scale Graph Encoder–Decoder Network
by Fang Wang, Xingqian Du, Weiguang Zhang, Liang Nie, Hu Wang, Shun Zhou and Jun Ma
Remote Sens. 2024, 16(20), 3912; https://doi.org/10.3390/rs16203912 - 21 Oct 2024
Viewed by 1058
Abstract
The rapid development of sensor technology has made multi-modal remote sensing data valuable for land cover classification due to its diverse and complementary information. Many feature extraction methods for multi-modal data, combining light detection and ranging (LiDAR) and hyperspectral imaging (HSI), have recognized the importance of incorporating multiple spatial scales. However, effectively capturing both long-range global correlations and short-range local features simultaneously on different scales remains a challenge, particularly in large-scale, complex ground scenes. To address this limitation, we propose a multi-scale graph encoder–decoder network (MGEN) for multi-modal data classification. The MGEN adopts a graph model that maintains global sample correlations to fuse multi-scale features, enabling simultaneous extraction of local and global information. The graph encoder maps multi-modal data from different scales to the graph space and completes feature extraction in the graph space. The graph decoder maps the features of multiple scales back to the original data space and completes multi-scale feature fusion and classification. Experimental results on three HSI-LiDAR datasets demonstrate that the proposed MGEN achieves considerable classification accuracies and outperforms state-of-the-art methods.
(This article belongs to the Special Issue 3D Scene Reconstruction, Modeling and Analysis Using Remote Sensing)
Figures 1–7: the overall MGEN framework; the graph encoder structure; visualized classification results on the Trento, MUUFL, and Houston datasets; single-scale results for different values of λ; and OA, AA, and Kappa of MGEN for different scale parameters on the MUUFL dataset.
23 pages, 6173 KiB  
Article
Scene Classification of Remote Sensing Image Based on Multi-Path Reconfigurable Neural Network
by Wenyi Hu, Chunjie Lan, Tian Chen, Shan Liu, Lirong Yin and Lei Wang
Land 2024, 13(10), 1718; https://doi.org/10.3390/land13101718 - 20 Oct 2024
Viewed by 634
Abstract
Land image recognition and classification and land environment detection are important research fields in remote sensing applications. Because of the diversity and complexity of land environment recognition and classification tasks, it is difficult for researchers to use a single model to achieve the best performance in scene classification across multiple remote sensing land images. Therefore, to determine which model is best for the current recognition and classification task, it is often necessary to select and experiment with many different models. However, finding the optimal model increases trial-and-error costs, wastes researchers' time, and often cannot be done quickly. To address the difficulty of selecting among existing, often very large, models, this paper proposes a multi-path reconfigurable network structure and takes the multi-path reconfigurable residual network (MR-ResNet) model as an example. The reconfigurable neural network model allows researchers to selectively choose the required modules and reassemble them into customized models, by splitting trained models and connecting the resulting modules, which have different properties. At the same time, by introducing the concept of a multi-path input network, the optimal path is selected by feeding different modules, which shortens the training time of the model and allows researchers to easily find a network model suited to the current application scenario. Substantial training data, computational resources, and model-tuning effort are saved. Three public datasets, NWPU-RESISC45, RSSCN7, and SIRI-WHU, were used for the experiments. The experimental results demonstrate that the proposed model surpasses the classic residual network (ResNet) in terms of both parameters and performance.
(This article belongs to the Special Issue GeoAI for Land Use Observations, Analysis and Forecasting)
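As a rough illustration of the recombination idea described above, namely splitting trained networks into stage-level modules and splicing them back together, the sketch below joins the early stages of one torchvision ResNet to the late stages of another. This is only a shape-compatible toy (ResNet-18 and ResNet-34 share BasicBlock channel widths), not the paper's MR-ResNet; the class name, the chosen split points, and the 45-class head are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, resnet34

class SplicedResNet(nn.Module):
    """Early stages from model A (ResNet-18), late stages from model B (ResNet-34)."""

    def __init__(self, num_classes: int = 45):
        super().__init__()
        a, b = resnet18(weights=None), resnet34(weights=None)  # load your own weights here
        self.stem = nn.Sequential(a.conv1, a.bn1, a.relu, a.maxpool)
        self.early = nn.Sequential(a.layer1, a.layer2)   # modules taken from model A (out: 128 ch)
        self.late = nn.Sequential(b.layer3, b.layer4)    # modules taken from model B (in: 128 ch)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(512, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.late(self.early(self.stem(x))))

# Usage: SplicedResNet()(torch.randn(2, 3, 224, 224)).shape -> torch.Size([2, 45])
```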
Figures 1–15: the module split strategy; six recombination variants of modules taken from models A and B; multipath channel selection over ResNet18 layers; samples from the RSSCN7, SIRI-WHU, and NWPU-RESISC45 datasets; confusion matrices of the BestA and JoinG models; heat maps for parking lot, residential, industrial, river, aircraft, and highway categories compared with ResNet baselines; and an accuracy plot for each mode.
20 pages, 1584 KiB  
Article
Hyperspectral Image Classification Algorithm for Forest Analysis Based on a Group-Sensitive Selective Perceptual Transformer
by Shaoliang Shi, Xuyang Li, Xiangsuo Fan and Qi Li
Appl. Sci. 2024, 14(20), 9553; https://doi.org/10.3390/app14209553 - 19 Oct 2024
Viewed by 809
Abstract
Substantial advancements have been achieved in hyperspectral image (HSI) classification through contemporary deep learning techniques. Nevertheless, the incorporation of an excessive number of irrelevant tokens in large-scale remote sensing data results in inefficient long-range modeling. To overcome this hurdle, this study introduces the Group-Sensitive Selective Perception Transformer (GSAT) framework, which builds upon the Vision Transformer (ViT) to enhance HSI classification outcomes. The innovation of the GSAT architecture is primarily evident in several key aspects. Firstly, the GSAT incorporates a Group-Sensitive Pixel Group Mapping (PGM) module, which organizes pixels into distinct groups. This allows the global self-attention mechanism to function within these groupings, effectively capturing local interdependencies within spectral channels. This grouping tactic not only boosts the model's spatial awareness but also lessens computational complexity, enhancing overall efficiency. Secondly, the GSAT addresses the detrimental effects of superfluous tokens on model efficacy by introducing the Sensitivity Selection Framework (SSF) module. This module selectively identifies the most pertinent tokens for classification purposes, thereby minimizing distractions from extraneous information and bolstering the model's representational strength. Furthermore, the SSF refines local representation through multi-scale feature selection, enabling the model to more effectively encapsulate feature data across various scales. Additionally, the GSAT architecture adeptly represents both global and local features of HSI data by merging global self-attention with local feature extraction. This integration strategy not only elevates classification precision but also enhances the model's versatility in navigating complex scenes, particularly in urban mapping scenarios, where it significantly outclasses previous deep learning methods. The GSAT architecture not only rectifies the inefficiencies of traditional deep learning approaches in processing extensive remote sensing imagery but also markedly enhances the performance of HSI classification tasks through group-sensitive and selective perception mechanisms. It presents a novel viewpoint within the domain of hyperspectral image classification and is poised to propel further advancements in the field. Empirical testing on six standard HSI datasets confirms the superior performance of the proposed GSAT method in HSI classification, especially within urban mapping contexts. In essence, the GSAT architecture markedly refines HSI classification by pioneering group-sensitive pixel group mapping and selective perception mechanisms, heralding a significant breakthrough in hyperspectral image processing.
Figures 1–8: a schematic of the GSAT architecture; classification maps on the Salinas, HyRANK-Loukia, Pavia University, WHU-Hi-HongHu, WHU-Hi-HanChuan, and WHU-Hi-LongKou datasets compared with AB-LSTM, 3D-CNN, SPEFORMER, SSFTT, M3D-DCNN, DFFN, and RSSAN; and the detailed training epoch curves.
21 pages, 3845 KiB  
Article
Semantic Segmentation of Satellite Images for Landslide Detection Using Foreground-Aware and Multi-Scale Convolutional Attention Mechanism
by Chih-Chang Yu, Yuan-Di Chen, Hsu-Yung Cheng and Chi-Lun Jiang
Sensors 2024, 24(20), 6539; https://doi.org/10.3390/s24206539 - 10 Oct 2024
Viewed by 576
Abstract
Advancements in satellite and aerial imagery technology have made it easier to obtain high-resolution remote sensing images, leading to widespread research and applications in various fields. Remote sensing image semantic segmentation is a crucial task that provides semantic and localization information for target objects. In addition to the large-scale variation issues common in most semantic segmentation datasets, aerial images present unique challenges, including high background complexity and imbalanced foreground–background ratios. However, general semantic segmentation methods primarily address scale variations in natural scenes and often neglect the specific challenges in remote sensing images, such as inadequate foreground modeling. In this paper, we present a foreground-aware remote sensing semantic segmentation model. The model introduces a multi-scale convolutional attention mechanism and utilizes a feature pyramid network architecture to extract multi-scale features, addressing the multi-scale problem. Additionally, we introduce a Foreground–Scene Relation Module to mitigate false alarms. The model enhances the foreground features by modeling the relationship between the foreground and the scene. In the loss function, a Soft Focal Loss is employed to focus on foreground samples during training, alleviating the foreground–background imbalance issue. Experimental results indicate that our proposed method outperforms current state-of-the-art general semantic segmentation methods and transformer-based methods on the LS dataset benchmark.
(This article belongs to the Section Sensing and Imaging)
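The Soft Focal Loss mentioned above builds on the standard focal loss, FL(p_t) = −α(1 − p_t)^γ log(p_t), which down-weights easy (mostly background) pixels; the exact "soft" variant is the paper's own, so the sketch below shows only the standard binary form it presumably extends.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits: torch.Tensor, targets: torch.Tensor,
                      alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Standard binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t)."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")  # = -log(p_t)
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)           # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```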
Figures 1–11: challenges of high-resolution remote sensing imagery; the overall architecture; the MSCAN backbone and its blocks; the feature pyramid network; a layer of the foreground–scene relationship module; the decoder; LS dataset examples (512 × 512 patches); visualized detection results of different methods, with red boxes marking incorrect detections by other models and green boxes marking better detection details of the proposed method; detection results for large-scale objects; and a speed (FPS) versus accuracy (mIoU) comparison on the LS dataset.
20 pages, 8242 KiB  
Article
A Scene Graph Similarity-Based Remote Sensing Image Retrieval Algorithm
by Yougui Ren, Zhibin Zhao, Junjian Jiang, Yuning Jiao, Yining Yang, Dawei Liu, Kefu Chen and Ge Yu
Appl. Sci. 2024, 14(18), 8535; https://doi.org/10.3390/app14188535 - 22 Sep 2024
Viewed by 910
Abstract
With the rapid development of remote sensing image data, the efficient retrieval of target images of interest has become an important issue in various applications, including computer vision and remote sensing. This research addresses the low-accuracy problem of traditional content-based image retrieval algorithms, which largely rely on comparing entire image features without capturing sufficient semantic information. We propose a scene graph similarity-based remote sensing image retrieval algorithm. Firstly, a one-shot object detection algorithm was designed for remote sensing images based on Siamese networks and tailored to objects of unknown classes in the query image. Secondly, a scene graph construction algorithm was developed based on the objects and their attributes and spatial relationships. Several construction strategies were designed for different relationships, including full connections, random connections, nearest connections, star connections, and ring connections. Thirdly, by making full use of edge features for scene graph feature extraction, a graph feature extraction network was established based on edge features. Fourthly, a neural tensor network-based similarity calculation algorithm was designed for graph feature vectors to obtain image retrieval results. Fifthly, a dataset named remote sensing images with scene graphs (RSSG) was built for testing, containing 929 remote sensing images with their corresponding scene graphs generated by the developed construction strategies. Finally, in performance comparison experiments with the remote sensing image retrieval algorithms AMFMN, MiLaN, and AHCL, Precision@1 improved by 10%, 7.2%, and 5.2%; Precision@5 improved by 3%, 5%, and 1.7%; and Precision@10 improved by 1.7%, 3%, and 0.6%. In recall, Recall@1 improved by 2.5%, 4.3%, and 1.3%; Recall@5 improved by 3.7%, 6.2%, and 2.1%; and Recall@10 improved by 4.4%, 7.7%, and 1.6%.
(This article belongs to the Special Issue Deep Learning for Graph Management and Analytics)
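Precision@K and Recall@K quoted above are the usual retrieval metrics: of the top-K returned images, Precision@K is the fraction that are relevant, and Recall@K is the fraction of all relevant images that appear in the top K. A minimal per-query sketch (averaging over queries is assumed; the identifiers are illustrative):

```python
def precision_recall_at_k(ranked_ids: list[str], relevant_ids: set[str],
                          k: int) -> tuple[float, float]:
    """Precision@K and Recall@K for one query's ranked retrieval list."""
    top_k = ranked_ids[:k]
    hits = sum(1 for image_id in top_k if image_id in relevant_ids)
    precision = hits / k
    recall = hits / len(relevant_ids) if relevant_ids else 0.0
    return precision, recall
```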
Figures 1–12: examples of semantic-feature-based image retrieval; the overall retrieval algorithm; the SiamACDet one-shot detector with its residual, AC, and ACSE modules, the FPN-structure-based ACSENet, the ARPN, and the double-head detector; an example of the scene graph construction method; the EGCN module; the feature similarity calculation module; and a sample from the RSSG dataset.
12 pages, 3941 KiB  
Article
A Reflective Spectroscopy Proof-of-Concept Study of Urea for Supporting Investigations of Human Waste in Multiple Forensic Contexts
by Lilly McClelland, Ethan Belak, Juliana Curtis, Ethan Krekeler, April Sanders and Mark P. S. Krekeler
Forensic Sci. 2024, 4(3), 463-474; https://doi.org/10.3390/forensicsci4030030 - 20 Sep 2024
Viewed by 882
Abstract
Human urine and its detection are of interest in forensic studies in numerous contexts. Both crystalline urea and 1.0 M solutions of urea, as synthetic analog endmember components of human urine, were investigated in a proof-of-concept study to determine whether detailed lab spectroscopy would be viable. Urea was reliably detected on Ottawa sand at concentrations of approximately 3.2% in dried experiments. Urea was also detectable after 1 week of solution evaporation under lab conditions, at 9.65 wt.% of 1 M solution. This investigation establishes urea as a material of interest for reflective spectroscopy and hyperspectral remote sensing/imaging spectroscopy on a wide range of spatial scales, from specific centimeter-scale areas in a crime scene to searching large outdoor regions > 1 km². In addition, this investigation is relevant to improving the monitoring of human trafficking, of the status and condition of refugee camps, and of sewage.
(This article belongs to the Special Issue The Role of Forensic Geology in Criminal Investigations)
Figures 1–6: the molecular structure of urea and the crystal structure of quartz; PPL and XPL photomicrographs of the Ottawa sand; SEM images of quartz grains with EDS spectra; reflective spectra of urea and Ottawa sand; reflective spectra of urea–sand mixtures, with urea features emerging near 3.2% urea; and reflective spectra of 1.0 M urea solution and sand combinations.
23 pages, 8710 KiB  
Article
Sea–Land Segmentation of Remote-Sensing Images with Prompt Mask-Attention
by Yingjie Ji, Weiguo Wu, Shiqiang Nie, Jinyu Wang and Song Liu
Remote Sens. 2024, 16(18), 3432; https://doi.org/10.3390/rs16183432 - 16 Sep 2024
Cited by 1 | Viewed by 664
Abstract
Remote-sensing technology has gradually become one of the most important ways to extract sea–land boundaries because of its large coverage, high efficiency, and low cost. However, sea–land segmentation (SLS) remains challenging due to data diversity and inconsistency, the problems of "different objects with the same spectrum" and "the same object with different spectra", and noise and interference. In this paper, a new sea–land segmentation method (PMFormer) for remote-sensing images is proposed. The contributions are twofold. First, building on the Mask2Former architecture, we introduce a prompt mask derived from the normalized difference water index (NDWI) of the target image, together with a prompt encoder. The prompt mask imposes more reasonable constraints on attention, alleviating segmentation errors at small region boundaries and narrow branches that arise when prior information is insufficient owing to large data diversity or inconsistency. Second, to address the large intra-class differences in foreground–background segmentation of sea–land scenes, we use deep clustering to simplify the query vectors and make them better suited to binary segmentation. Traditional NDWI and eight other deep-learning methods are then thoroughly compared with the proposed PMFormer on three open sea–land datasets. Detailed comparative experiments covering quantitative analysis, qualitative analysis, time consumption, and error distribution confirm the effectiveness of the proposed method.
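A minimal sketch of how an NDWI-based water prompt can be produced from a green and a near-infrared band is shown below; the NDWI formula (G − NIR)/(G + NIR) is the standard definition, while the threshold of 0.0 and the random demo arrays are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ndwi_prompt_mask(green: np.ndarray, nir: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Binary water prompt from NDWI = (Green - NIR) / (Green + NIR)."""
    eps = 1e-6                                   # guard against division by zero
    ndwi = (green - nir) / (green + nir + eps)
    return (ndwi > threshold).astype(np.uint8)   # 1 = water prompt, 0 = land

# Example with random reflectance bands standing in for a real image tile.
green = np.random.rand(256, 256).astype(np.float32)
nir = np.random.rand(256, 256).astype(np.float32)
mask = ndwi_prompt_mask(green, nir)
print(mask.shape, mask.dtype)
```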
Show Figures

Figure 1: Coastline definition and sea–land edges [20].
Figure 2: Spectrum of water and other materials [21].
Figure 3: The overview of the PMFormer.
Figure 4: The block for the prompt encoder and the block for the fusion decoder.
Figure 5: Results of different methods for the SWED dataset.
Figure 6: Results of different methods for the SLL8 dataset.
Figure 7: Results of different methods for the SLGF dataset.
Figure 8: The result of a large image by PMFormer. The size of the original image is 8000 × 2500. The sea–land boundaries are marked by curve lines with different colors.
Figure 9: Error areas of different methods for the SWED dataset. The blue areas are where land is mistakenly classified as water; the green areas are where water is mistakenly classified as land.
Figure 10: Error areas of different methods for the SLGF dataset. The blue areas are where land is mistakenly classified as water; the green areas are where water is mistakenly classified as land.
20 pages, 6898 KiB  
Article
Estimation of Maize Biomass at Multi-Growing Stage Using Stem and Leaf Separation Strategies with 3D Radiative Transfer Model and CNN Transfer Learning
by Dan Zhao, Hao Yang, Guijun Yang, Fenghua Yu, Chengjian Zhang, Riqiang Chen, Aohua Tang, Wenjie Zhang, Chen Yang and Tongyu Xu
Remote Sens. 2024, 16(16), 3000; https://doi.org/10.3390/rs16163000 - 15 Aug 2024
Cited by 1 | Viewed by 1192
Abstract
The precise estimation of above-ground biomass (AGB) is imperative for the advancement of breeding programs. Optical variables, such as vegetation indices (VI), have been extensively employed in monitoring AGB. However, the limited robustness of inversion models remains a significant impediment to the widespread application of UAV-based multispectral remote sensing in AGB inversion. In this study, a novel stem–leaf separation strategy for AGB estimation is delineated. Convolutional neural network (CNN) and transfer learning (TL) methodologies are integrated to estimate leaf biomass (LGB) across multiple growth stages, followed by the development of an allometric growth model for estimating stem biomass (SGB). To enhance the precision of LGB inversion, the large-scale remote sensing data and image simulation framework over heterogeneous scenes (LESS) model, a three-dimensional (3D) radiative transfer model (RTM), was utilized to simulate a more extensive canopy spectral dataset with a broad distribution of canopy spectra. The CNN model was pre-trained to gain prior knowledge, and this knowledge was transferred to a re-trained model using a subset of field-observed samples. Finally, the allometric growth model was utilized to estimate SGB across various growth stages. To further validate the generalizability, transferability, and predictive capability of the proposed method, field samples from 2022 and 2023 were employed as target tasks. The results demonstrated that the 3D RTM + CNN + TL method performed best in LGB estimation, achieving an R² of 0.73 and an RMSE of 72.5 g/m² for the 2022 dataset, and an R² of 0.84 and an RMSE of 56.4 g/m² for the 2023 dataset. In contrast, the PROSAIL method yielded an R² of 0.45 and an RMSE of 134.55 g/m² for the 2022 dataset, and an R² of 0.74 and an RMSE of 61.84 g/m² for the 2023 dataset. The accuracy of LGB inversion was poor when only field-measured samples were used to train a CNN model without simulated data, with R² values of 0.30 and 0.74. Overall, learning prior knowledge from the simulated dataset and transferring it to a new model significantly enhanced LGB estimation accuracy and model generalization. Additionally, the allometric growth model estimated SGB with an R² of 0.87 and an RMSE of 120.87 g/m² for the 2022 dataset, and an R² of 0.74 and an RMSE of 86.87 g/m² for the 2023 dataset, which is satisfactory. Separate estimation of LGB and SGB based on the stem and leaf separation strategy yielded promising results, and the method can be extended to the monitoring and inversion of other critical variables.
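A minimal sketch of the pre-train-then-transfer idea described above, written in PyTorch with a small 1D CNN over vegetation-index vectors; the layer sizes, learning rate, and the choice to freeze the convolutional feature extractor are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class VICNN1D(nn.Module):
    """Small 1D CNN that regresses leaf biomass (LGB) from a vector of
    vegetation indices; layer sizes are illustrative only."""
    def __init__(self, n_vi: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.BatchNorm1d(16), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.BatchNorm1d(32), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                 # x: (batch, 1, n_vi)
        return self.head(self.features(x).squeeze(-1))

model = VICNN1D()
# Step 1: pre-train on the large simulated (3D RTM) dataset -- omitted here.
# Step 2: transfer -- freeze the convolutional feature extractor and re-train
#         only the regression head on the small set of field-observed samples.
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
```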
Show Figures

Figure 1: Geographical location of the experimental sites and UAV digital images acquired in 2021, 2022, and 2023. Note: (a) the geographical location of all experiments; (b) experiment 1, conducted with a plant density treatment in 2021; (c) experiment 2, conducted with a plant density treatment in 2022; (d) experiment 2, conducted with a nitrogen gradient treatment in 2023; A1–A5, B4, B5, C6, and C7 are Zhengdan 958, Jiyuan 1, Jiyuan 168, Jingjiuqingchu 16, Nongkenuo 336, Dajingjiu 26, Jingnongke 728, Tianci 19, and Jingnuo 2008, respectively.
Figure 2: The measurement position of the leaf angle. The red point is the center position of the leaf.
Figure 3: Proposed 1D CNN architecture for estimating LGB. The convolution layers are named Cov1, Cov2, Cov3, and Cov4. Letter B represents the batch normalization layer; letter R represents the ReLU layer.
Figure 4: Pearson's correlation coefficients between 20 vegetation indices and LGB. DVI, EVI, VIRed, MSAVI, MTVI2, NDVI, SAVI, MCARI, and SIPI exhibit high correlation, whereas SCCCI, CI1, NDRE, VIRedge, and VIGreen show low correlation.
Figure 5: R² and RMSE of the CNN model with different numbers of VIs based on the simulated dataset.
Figure 6: Value distribution of the simulated and measured VIs; (a–l) represent the data density of 12 VIs from the simulated data and UAV data; the orange line is measured UAV data; the blue line is simulated data; the y-axis represents the value density; the x-axis represents the VI value.
Figure 7: Spectral reflectance curves for three growth stages from the simulated and field-measured datasets. As the growth stages progress, LAI increases, accompanied by a corresponding increase in LGB.
Figure 8: Measured and estimated LGB for the years 2022 and 2023 at three stages. (a–c) 3D RTM + CNN method; (d–f) PROSAIL + CNN + TL method.
Figure 9: The loss function value of the training set and testing set during re-training of the model.
Figure 10: Measured and estimated stem biomass in 2022 and 2023. (a) Scatter plot between estimated and measured SGB for 2022 using the allometric growth model; (b) scatter plot between estimated and measured SGB for 2023 using the allometric growth model. The blue points in each figure include three growth stages.
Figure 11: Measured and predicted 2023 LGB across four ablation experiments. The black dashed line represents the 1:1 line. (a) Experiment E1; (b) experiment E2; (c) experiment E3. The result of experiment E4 is shown in Figure 8c.
Figure 12: Measured and estimated LGB from the 3D RTM + PLSR and CNN methods. (a–c) Scatter plots between estimated and measured LGB using the CNN method for 2021, 2022, and 2023 samples; (d–f) scatter plots between estimated and measured LGB using the 3D RTM + PLSR method for 2021, 2022, and 2023 samples. The black line is the 1:1 line.
17 pages, 4776 KiB  
Article
Optimization of Remote-Sensing Image-Segmentation Decoder Based on Multi-Dilation and Large-Kernel Convolution
by Guohong Liu, Cong Liu, Xianyun Wu, Yunsong Li, Xiao Zhang and Junjie Xu
Remote Sens. 2024, 16(15), 2851; https://doi.org/10.3390/rs16152851 - 3 Aug 2024
Viewed by 870
Abstract
Land-cover segmentation, a fundamental task within the domain of remote sensing, boasts a broad spectrum of application potential. To address the challenges of land-cover segmentation in remote-sensing imagery, we carried out the following work. Firstly, to tackle foreground–background imbalance and scale variation, a module based on multi-dilation-rate convolution fusion was integrated into the decoder. This module extends the receptive field through multi-dilation convolution, enhancing the model's capability to capture global features. Secondly, to address scene diversity and background interference, a hybrid attention module based on large-kernel convolution was employed to improve the decoder's performance. Built on a combination of spatial and channel attention mechanisms, this module enhances the extraction of contextual information through large-kernel convolution; a convolution kernel selection mechanism was also introduced to dynamically select the kernel with the appropriate receptive field, suppress irrelevant background information, and improve segmentation accuracy. Ablation studies on the Vaihingen and Potsdam datasets demonstrate that our decoder significantly outperforms the baseline in terms of mean intersection over union and mean F1 score, with increases of up to 1.73% and 1.17%, respectively. In quantitative comparisons, the accuracy of our improved decoder also surpasses other algorithms in the majority of categories. These results indicate that the improved decoder achieves a significant performance improvement over the original decoder in remote-sensing image-segmentation tasks, which verifies its application potential in the field of land-cover segmentation.
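A minimal sketch of a multi-dilation-rate convolution fusion block of the kind described above; the dilation rates (1, 2, 4, 8), channel counts, and residual fusion are illustrative assumptions rather than the paper's exact MDCFD design.

```python
import torch
import torch.nn as nn

class MultiDilationFusion(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates whose outputs
    are concatenated and fused by a 1x1 convolution, plus a residual path."""
    def __init__(self, channels: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates]
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        fused = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return fused + x                     # keep the original features

feat = torch.randn(1, 64, 128, 128)
print(MultiDilationFusion(64)(feat).shape)   # torch.Size([1, 64, 128, 128])
```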
Show Figures

Graphical abstract
Figure 1: The structure of the multi-dilation and large-kernel convolution-based decoder.
Figure 2: The structure of the Multi-Dilation Rate Convolutional Fusion Module.
Figure 3: The structure of the Multi-Dilation Rate Convolutional Fusion Decoder.
Figure 4: The structure of the Large-Kernel-Selection Hybrid Attention Module.
Figure 5: The structure of kernel selection.
Figure 6: The configuration of the Large-Kernel-Selection Spatial Attention Module.
Figure 7: The structure of the Large-Kernel Channel Attention Module.
Figure 8: Structural diagram of the decoder improved by MDCFD and LKSHAM.
Figure 9: Feature map analysis of our decoder: the feature maps and segmentation outcomes of two images in the dataset after processing with LKSHAM and MDCFD.
Figure 10: (a–h) Comparison of segmentation results before and after the introduction of MDCFD. (a,e) The original images in the test set; (b,f) ground truth of the original images; (c,g) the segmentation results without MDCFD; (d,h) the segmentation results with MDCFD.
Figure 11: (a–p) Comparative visualization of model segmentation efficacy pre- and post-enhancement on incorrectly classified samples. (a,e,i,m) The original images in the test set; (b,f,j,n) ground truth of the original images; (c,g,k,o) the segmentation results of the baseline; (d,h,l,p) the segmentation results of ours.
Figure 12: (a–l) Comparative visualization of model segmentation efficacy pre- and post-enhancement. (a,e,i) The original images in the test set; (b,f,j) ground truth of the original images; (c,g,k) the segmentation results of the baseline; (d,h,l) the segmentation results of ours.
21 pages, 16351 KiB  
Article
Fine-Scale Quantification of the Effect of Maize Tassel on Canopy Reflectance with 3D Radiative Transfer Modeling
by Youyi Jiang, Zhida Cheng, Guijun Yang, Dan Zhao, Chengjian Zhang, Bo Xu, Haikuan Feng, Ziheng Feng, Lipeng Ren, Yuan Zhang and Hao Yang
Remote Sens. 2024, 16(15), 2721; https://doi.org/10.3390/rs16152721 - 25 Jul 2024
Viewed by 842
Abstract
Quantifying the effect of maize tassels on canopy reflectance is essential for creating a tasseling-progress monitoring index, aiding precision agriculture monitoring, and understanding vegetation canopy radiative transfer. Traditional field measurements often struggle to detect the subtle reflectance differences caused by tassels because of complex environmental factors and the difficulty of controlling variables. Three-dimensional (3D) radiative transfer models offer a reliable way to study this relationship by accurately simulating the interactions between solar radiation and canopy structure. This study used the LESS (large-scale remote sensing data and image simulation framework) model to analyze the impact of maize tassels on visible and near-infrared reflectance in heterogeneous 3D scenes by modifying the structural and optical properties of canopy components. We also examined the anisotropic characteristics of tassel effects on canopy reflectance and explored the mechanisms behind these effects based on the quantified contributions of the optical properties of canopy components. The results showed that (1) the effect of tassels under different planting densities manifests mainly in the near-infrared band of the canopy spectrum, with a variation magnitude of ±0.04, whereas the response difference across different leaf area index (LAI) levels is smaller, with a magnitude of ±0.01; as tassels change from green to gray during growth, their effect of reducing canopy reflectance increases. (2) The effect of maize tassels on canopy reflectance varies with spectral band and shows a clear directional dependence: at the same sun position, the difference in the tassel effect on canopy reflectance caused by the view zenith angle reaches 200% in the red band and up to 400% in the near-infrared band, and the canopy hotspot effect significantly weakens the shadowing effect of the tassel. (3) The non-transmittance of maize tassels reduces canopy reflectance, while their high reflectance increases it; these two opposing effects compete in determining canopy reflectance, with the final outcome depending mainly on the sensitivity of the canopy spectrum to transmittance. This study demonstrates the potential of 3D radiative transfer models for quantifying the effects of fine crop structure on canopy reflectance and provides insights for optimizing crop structure and implementing precision agriculture management (such as selective breeding for an optimal plant type).
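A minimal sketch of the bookkeeping behind the reported tassel effect, i.e., the per-band, per-direction difference between paired with-tassel and without-tassel canopy simulations; the arrays below are random placeholders standing in for LESS outputs, not the study's results.

```python
import numpy as np

def tassel_effect(r_with: np.ndarray, r_without: np.ndarray):
    """Absolute and relative (%) reflectance difference between paired canopy
    simulations, per view direction and band."""
    delta = r_with - r_without
    relative = 100.0 * delta / np.clip(r_without, 1e-6, None)
    return delta, relative

# Placeholder arrays standing in for paired LESS outputs with shape
# (n_view_zenith_angles, n_bands); the values are random, not results.
rng = np.random.default_rng(0)
r_no_tassel = rng.uniform(0.02, 0.50, size=(13, 2))   # columns: red, NIR
r_tassel = r_no_tassel + rng.normal(0.0, 0.01, size=(13, 2))
delta, rel = tassel_effect(r_tassel, r_no_tassel)
print(delta.min(), delta.max(), rel.min(), rel.max())
```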
Show Figures

Figure 1: Overview of the study area. (a) Location of the study area; (b) UAV image of the experimental spot.
Figure 2: Observation instrument and artificial tassel cutting test site. (a) DJI P4 Multispectral; (b) ASD FieldSpec 4 Hi-Res; (c) maize trial plot.
Figure 3: Data acquisition process for maize stem and leaf structure. (a) Artec Leo scanner; (b) indoor scanning; (c) 3D structure of maize.
Figure 4: Schematic diagram of the 3D phenotypic measurement system for maize tassel.
Figure 5: 3D reconstruction of maize tassel.
Figure 6: Maize tassel reflectance measurement process. (a) Maize tassels at different growth stages; (b) measurement scenarios.
Figure 7: Comparison of measured and simulated values of canopy reflectance in the tassel cutting experiment. In this figure, the vertical axis label "Difference of reflectance" represents the reflectance difference between the canopy with and without tassels (same below).
Figure 8: 3D reconstruction results of maize tassel. (a) Compact type; (b) fewer-tassel-branches type; (c) loose type.
Figure 9: Canopy reflectance difference caused by different tassel structures. (a) Vertical observation; (b) hotspot direction; (c) dark spot direction.
Figure 10: Canopy reflectance difference caused by tassels at different growth stages. (a) Input spectra of the LESS model; (b) vertical observation; (c) hotspot direction; (d) dark spot direction.
Figure 11: 3D scene top view of different planting densities. (a) 60,000 plants/hm²; (b) 75,000 plants/hm²; (c) 90,000 plants/hm²; (d) 120,000 plants/hm².
Figure 12: The reflectance difference between the canopy with and without tassels under different planting densities. (a) Vertical observation; (b) hotspot direction.
Figure 13: The difference in the maximum tassel effect under different planting densities. (a) Visible band; (b) near-infrared band.
Figure 14: The difference in tassel effect under different planting densities. (a) LAI = 5; (b) LAI = 4.5; (c) LAI = 4; (d) LAI = 3.5; (e) LAI = 3.
Figure 15: Difference analysis of canopy reflectance under different LAI. (a,b) Changes in canopy reflectance without a tassel; (c,d) differences in canopy reflectance caused by tassels.
Figure 16: Polar plot of canopy reflectance for maize without tassel at 695 nm and 775 nm, and the three sun positions considered. θs represents the zenith angle of the sun: (a,d) 13:50; (b,e) 15:20; (c,f) 16:30. The black cross marker represents the sun's position. The black dashed line is the row direction (east–west).
Figure 17: Polar representation of the reflectance difference distribution of the non-tassel canopy in the symmetrical direction of the principal plane at different sun positions. R represents the hemisphere of the red band, and N represents the hemisphere of the near-infrared band. θs represents the zenith angle of the sun: (a) 13:50; (b) 15:20; (c) 16:30. The projection line of the principal plane of the sun is shown as a white solid line. The black dashed line is the row direction (east–west).
Figure 18: Directional distribution of the reflectance difference between the canopy with and without tassels. θs represents the zenith angle of the sun: (a,d) 13:50; (b,e) 15:20; (c,f) 16:30. The black cross marker represents the sun's position. The black dashed line is the row direction (east–west).
Figure 19: Comparison of the effects of leaf optical properties on the reflectance of the non-tassel canopy. (a) Comparison of canopy reflectance change; (b) percentage reduction of canopy reflectance. In the figure, T represents the transmittance and BR the back reflectance (same below).
Figure 20: Comparison of the effects of the reflectance of NTAB leaves on the reflectance of the non-tassel canopy. In the figure, FR represents the front reflectance.
Figure 21: Comparison of the effects of tassels with transmittance on canopy reflectance.