Search Results (266)

Search Parameters:
Keywords = leaf area index retrieval

24 pages, 7131 KiB  
Article
Soil Moisture Retrieval in the Northeast China Plain’s Agricultural Fields Using Single-Temporal L-Band SAR and the Coupled MWCM-Oh Model
by Zhe Dong, Maofang Gao and Arnon Karnieli
Remote Sens. 2025, 17(3), 478; https://doi.org/10.3390/rs17030478 - 30 Jan 2025
Viewed by 485
Abstract
Timely access to soil moisture distribution is critical for agricultural production. As an in-orbit L-band synthetic aperture radar (SAR), SAOCOM offers high penetration and full polarization, making it suitable for agricultural soil moisture estimation. In this study, based on the single-temporal coupled water cloud model (WCM) and Oh model, we first modified the WCM (MWCM) to incorporate bare soil effects on backscattering using SAR data, enhancing the scattering representation during crop growth. Additionally, the Oh model was revised to enable retrieval of both the surface layer (0–5 cm) and underlying layer (5–10 cm) soil moisture. SAOCOM data from 19 June 2022 and 23 June 2023 in Bei’an City, China, along with Sentinel-2 imagery from the same dates, were used to validate the coupled MWCM-Oh model individually. The enhanced vegetation index (EVI), normalized difference vegetation index (NDVI), and leaf area index (LAI), together with the radar vegetation index (RVI), served as vegetation descriptors. Results showed that surface soil moisture estimates were more accurate than those for the underlying layer. LAI performed best for surface moisture (RMSE = 0.045), closely followed by RVI (RMSE = 0.053). For underlying layer soil moisture, RVI provided the most accurate retrieval (RMSE = 0.038), while LAI, EVI, and NDVI tended to overestimate. Overall, LAI and RVI effectively capture surface soil moisture, and RVI is particularly suitable for underlying layers, enabling more comprehensive monitoring.
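The index-by-index comparison above rests on the root-mean-square error (RMSE) between retrieved and in situ soil moisture. A minimal sketch of that metric, using made-up moisture values rather than the study's data:

```python
import numpy as np

def rmse(predicted, measured):
    """Root-mean-square error between retrieved and in situ soil moisture."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return float(np.sqrt(np.mean((predicted - measured) ** 2)))

# Hypothetical volumetric soil moisture values (m^3/m^3), not the study's data
measured = [0.20, 0.25, 0.30, 0.35]
retrieved_lai = [0.24, 0.22, 0.33, 0.37]  # e.g. an LAI-driven retrieval

error = rmse(retrieved_lai, measured)
```

The same computation applies unchanged to the underlying-layer (5–10 cm) retrievals.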
Figure 1
<p>Location of the study area: (<b>a</b>) China; (<b>b</b>) land use of Heilongjiang Province; (<b>c</b>) distribution of points in the study area (2023).</p>
Figure 2
<p>Scattering mechanism of vegetation pre-growth on the pixel scale; θ represents the incidence angle, while the red arrow denotes the incident radar wave. (<b>a</b>) Backscattering from the vegetation canopy (green arrow); (<b>b</b>) backscattering from the soil through the vegetation (light green arrow); and (<b>c</b>) backscattering from bare soil (brown arrow).</p>
Figure 3
<p>Schematic illustration of the soil moisture retrieval process. CAL and VAL in Part C represent calibration and validation, respectively. Steps indicated by circled numbers represent key parameters: 1—Vegetation water content, 2—Surface roughness, 3—Dielectric constant, etc. The direction of the arrows indicates whether the parameter is an input (incoming arrows) or an output (outgoing arrows) at each step.</p>
Figure 4
<p>Spatial search example using a 2 km radius search window. The element d(u, i) represents the Euclidean distance between validation points and modeling points.</p>
Figure 5
<p>The relationship between measured VWC and vegetation indices; ** represents a highly significant correlation, while the dotted line represents the correlation function between measured VWC and vegetation indices.</p>
Figure 6
<p>Spatial distribution of VWC for different vegetation index retrievals.</p>
Figure 7
<p>Frequency distribution of inverse vegetation water content for each index.</p>
Figure 8
<p>Calculation of root-mean-square height of (<b>a</b>) surface roughness (s) and (<b>b</b>) underlying “roughness” (αs) for different indices. The red line indicates the median value, while the blue square represents the mean value.</p>
Figure 9
<p>Spatial distributions and frequency comparisons of surface and underlying soil moisture retrieved with the MWCM-Oh model using various vegetation indices. (<b>a</b>), (<b>d</b>), (<b>g</b>), and (<b>j</b>) show surface soil moisture derived from EVI, NDVI, LAI, and RVI, respectively. (<b>b</b>), (<b>e</b>), (<b>h</b>), and (<b>k</b>) present the corresponding underlying soil moisture, while (<b>c</b>), (<b>f</b>), (<b>i</b>), and (<b>l</b>) display the frequency distributions comparing the two retrievals for each index.</p>
Figure 10
<p>Scatter plot comparing predicted and measured soil moisture at two depths (0–5 cm and 5–10 cm) for different vegetation indices as model input: (<b>a</b>) EVI, (<b>b</b>) NDVI, (<b>c</b>) LAI, and (<b>d</b>) RVI.</p>
Figure 11
<p>Taylor diagram comparing model accuracy using different vegetation indices as inputs. Symbols represent indices (circle: EVI, triangle: NDVI, square: LAI, hexagon: RVI), with green for surface soil moisture (0–5 cm) and red for underlying soil moisture (5–10 cm). Closer symbols to the black curve and origin indicate better performance.</p>
Figure 12
<p>Distribution of α; α represents the ratio of underlying roughness to surface roughness at the same point. The red dotted line at α = 1.0 represents the theoretical maximum.</p>
23 pages, 7403 KiB  
Article
Integrating Drone-Based LiDAR and Multispectral Data for Tree Monitoring
by Beatrice Savinelli, Giulia Tagliabue, Luigi Vignali, Roberto Garzonio, Rodolfo Gentili, Cinzia Panigada and Micol Rossini
Drones 2024, 8(12), 744; https://doi.org/10.3390/drones8120744 - 10 Dec 2024
Cited by 1 | Viewed by 1059
Abstract
Forests are critical for providing ecosystem services and contributing to human well-being, but their health and extent are threatened by climate change, requiring effective monitoring systems. Traditional field-based methods are often labour-intensive, costly, and logistically challenging, limiting their use for large-scale applications. Drones offer advantages such as low operating costs, versatility, and rapid data collection. However, challenges remain in optimising data processing and methods to effectively integrate the acquired data for forest monitoring. This study addresses this challenge by integrating drone-based LiDAR and multispectral data for forest species classification and health monitoring. We developed the methodology in Ticino Park (Italy), where intensive field campaigns were conducted in 2022 to collect tree species compositions, the leaf area index (LAI), canopy chlorophyll content (CCC), and drone data. Individual trees were first extracted from LiDAR data and classified using spectral and textural features derived from the multispectral data, achieving an accuracy of 84%. Key forest traits were then retrieved from the multispectral data using machine learning regression algorithms, which showed satisfactory performance in estimating the LAI (R2 = 0.83, RMSE = 0.44 m2 m−2) and CCC (R2 = 0.80, RMSE = 0.33 g m−2). The retrieved traits were used to track species-specific changes related to drought. The results obtained highlight the potential of integrating drone-based LiDAR and multispectral data for cost-effective and accurate forest health monitoring and change detection.
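The retrieval skill reported above (R2 and RMSE for LAI and CCC) can be computed as follows; the validation pairs here are illustrative, not the paper's measurements:

```python
import numpy as np

def evaluate_retrieval(measured, estimated):
    """Return (R^2, RMSE) for a retrieved canopy trait such as LAI or CCC."""
    m = np.asarray(measured, dtype=float)
    e = np.asarray(estimated, dtype=float)
    rmse = float(np.sqrt(np.mean((e - m) ** 2)))
    ss_res = float(np.sum((m - e) ** 2))
    ss_tot = float(np.sum((m - m.mean()) ** 2))
    return 1.0 - ss_res / ss_tot, rmse

# Illustrative LAI validation pairs (m^2 m^-2)
measured_lai = [1.2, 2.5, 3.1, 4.0, 4.8]
estimated_lai = [1.0, 2.7, 3.0, 4.3, 4.6]
r2, rmse_lai = evaluate_retrieval(measured_lai, estimated_lai)
```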
Figure 1
<p>(<b>a</b>) RGB image of the “La Fagiana” nature reserve. The red dots indicate the centre of the sites (15 m × 15 m) where the plant traits were sampled, and the yellow dots are the centre of the validation sites (30 m × 30 m) for the individual tree detection. The shaded areas indicate the three main forest areas classified according to the microclimatic condition of the forest: meso-hygrophilic (green), mesophilic (yellow), and xerophilic (red). The Google satellite image of the area in grey scale is used as the basemap. (<b>b</b>) The extension of Ticino Park in Northern Italy (green polygon) and the location of the Fagiana area (red polygon).</p>
Figure 2
<p>Illustration of the LiDAR data processing workflow. DTM = digital terrain model; ITD = individual tree detection.</p>
Figure 3
<p>(<b>a</b>) Hyperspectral reflectance spectra collected by the PRISMA satellite in correspondence of the sampling sites where the field data were collected (n = 50); (<b>b</b>) PRISMA spectra resampled to the MAIA S2 spectral bands and used for the training of the machine learning regression algorithms (n = 50).</p>
Figure 4
<p>Drone-based classification of the tree species in the Fagiana Forest obtained from the MAIA S2 multispectral sensor using a random forest classifier. The Google satellite image of the area in grey scale is used as the basemap.</p>
Figure 5
<p>Scatter plots showing the measured and estimated leaf area index (LAI) values obtained from the MAIA S2 sensor with different machine learning regression algorithms: (<b>a</b>) Gaussian processes regression (GPR); (<b>b</b>) support vector regression (SVR); (<b>c</b>) partial least squares regression (PLSR); (<b>d</b>) neural network (NN); and (<b>e</b>) random forest (RF). The grey shaded areas indicate the confidence intervals (0.95) of the regression lines (solid lines) using reduced major axis (RMA) regression. The dotted line represents the 1:1 line.</p>
Figure 6
<p>Scatter plots showing the measured and estimated canopy chlorophyll content (CCC) values obtained from the MAIA S2 sensor with different machine learning regression algorithms: (<b>a</b>) Gaussian processes regression (GPR); (<b>b</b>) support vector regression (SVR); (<b>c</b>) partial least squares regression (PLSR); (<b>d</b>) neural network (NN); and (<b>e</b>) random forest (RF). The grey shaded areas indicate the confidence intervals (0.95) of the regression lines (solid lines) using reduced major axis (RMA) regression. The dotted line represents the 1:1 line.</p>
Figure 7
<p>Drone-based maps obtained from the MAIA S2 sensor using machine learning regression algorithms: (<b>a</b>,<b>b</b>) maps of the leaf area index (LAI) and canopy chlorophyll content (CCC) obtained from drone images collected on 1 July 2022; (<b>c</b>,<b>d</b>) maps of the LAI and CCC obtained from drone images collected on 31 August 2022; and (<b>e</b>,<b>f</b>) maps of the delta LAI and CCC obtained as the difference between the LAI and CCC values retrieved from the drone images collected on 31 August 2022 and 1 July 2022.</p>
Figure 8
<p>Boxplot of LAI against retrieval day (<b>a</b>) and forest microclimatic conditions (<b>b</b>). Different lowercase letters indicate statistically significant differences, while equal lowercase letters indicate no statistically significant difference.</p>
Figure 9
<p>Boxplot of the CCC against retrieval day (<b>a</b>) and forest type (<b>b</b>). Different lowercase letters indicate statistical differences, while equal lowercase letters indicate no statistical difference.</p>
25 pages, 8293 KiB  
Article
Estimating Grassland Biophysical Parameters in the Cantabrian Mountains Using Radiative Transfer Models in Combination with Multiple Endmember Spectral Mixture Analysis
by José Manuel Fernández-Guisuraga, Iván González-Pérez, Ana Reguero-Vaquero and Elena Marcos
Remote Sens. 2024, 16(23), 4547; https://doi.org/10.3390/rs16234547 - 4 Dec 2024
Cited by 1 | Viewed by 557
Abstract
Grasslands are one of the most abundant and biodiverse ecosystems in the world. However, in southern European countries, the abandonment of traditional management activities, such as extensive grazing, has caused many semi-natural grasslands to be invaded by shrubs. Therefore, there is a need to characterize semi-natural grasslands to determine their aboveground primary production and livestock-carrying capacity. Nevertheless, current methods lack a realistic identification of vegetation assemblages where grassland biophysical parameters can be accurately retrieved by the inversion of turbid-medium radiative transfer models (RTMs) in fine-grained landscapes. To this end, in this study we proposed a novel framework in which multiple endmember spectral mixture analysis (MESMA) was implemented to realistically identify grassland-dominated pixels from Sentinel-2 imagery in heterogeneous mountain landscapes. Then, the inversion of PROSAIL RTM (coupled PROSPECT and SAIL leaf and canopy models) was implemented separately for retrieving grassland biophysical parameters, including the leaf area index (LAI), fractional vegetation cover (FCOVER), and aboveground biomass (AGB), from grassland-dominated Sentinel-2 pixels while accounting for non-vegetated areas at the subpixel level. The study region was the southern slope of the Cantabrian Mountains (Spain), with a high spatial variability of fine-grained land covers. The MESMA grassland fraction image had a high accuracy based on validation results using centimetric resolution aerial orthophotographs (R2 = 0.74, and RMSE = 0.18). The validation with field reference data from several mountain passes of the southern slope of the Cantabrian Mountains featured a high accuracy for LAI (R2 = 0.74, and RMSE = 0.56 m2·m−2), FCOVER (R2 = 0.78 and RMSE = 0.07), and AGB (R2 = 0.67, and RMSE = 43.44 g·m−2). This study provides a reliable method to accurately identify and estimate grassland biophysical variables in highly diverse landscapes at a regional scale, with important implications for the management and conservation of threatened semi-natural grasslands. Future studies should investigate the PROSAIL inversion over the endmember signatures and subpixel fractions depicted by MESMA to adequately address the parametrization of the underlying background reflectance by using prior information and should also explore the scalability of this approach to other heterogeneous landscapes.
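The MESMA step resolves each pixel into endmember fractions and then masks grassland-dominated pixels by a fraction threshold. A toy sketch of the underlying constrained unmixing, using invented endmember spectra and SciPy's non-negative least squares (MESMA proper additionally iterates over multiple candidate endmember combinations per pixel):

```python
import numpy as np
from scipy.optimize import nnls

# Invented endmember spectra: rows are bands, columns are
# grass, shrub, and bare-soil endmembers.
endmembers = np.array([
    [0.05, 0.08, 0.20],
    [0.45, 0.30, 0.25],
    [0.30, 0.35, 0.40],
    [0.25, 0.28, 0.45],
])

# Synthetic mixed pixel: 70% grass, 20% shrub, 10% bare soil
true_fractions = np.array([0.7, 0.2, 0.1])
pixel = endmembers @ true_fractions

# Non-negative least squares recovers the subpixel fractions
fractions, residual = nnls(endmembers, pixel)

# Grassland-dominated mask analogous to the study's 70% fraction threshold
grass_dominated = fractions[0] > 0.65
```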
(This article belongs to the Section Environmental Remote Sensing)
Figure 1
<p>Location of the study region in western Europe (<b>A</b>) and southern slope of the Cantabrian Mountains with the location of the mountain passes sampled (<b>B</b>). The coordinate reference system is EPSG:4326.</p>
Figure 2
<p>Detail of the mountain passes sampled: Las Pintas (<b>A</b>), Vegarada (<b>B</b>), and San Isidro (<b>C</b>). Red dots depict the center of each sampling plot of 20 m × 20 m. Background images correspond to orthophotographs from the Spanish Aerial Orthophotography National Plan (PNOA, 25 cm resolution). The coordinate reference system is EPSG:25830.</p>
Figure 3
<p>Workflow of the methodology used in this study.</p>
Figure 4
<p>Mean spectral signature for each Level-3 endmember extracted from Sentinel-2 L2A data.</p>
Figure 5
<p>Relationship between modeled (multiple endmember spectral mixture analysis; MESMA) and reference (orthophoto) fraction of grass vegetation measured across the vicinity of Las Pintas, San Isidro, and Vegarada mountain passes (n = 50). The solid red line corresponds to the linear model fit. The dashed black line is the 1:1 line.</p>
Figure 6
<p>Classification map computed from multiple endmember spectral mixture analysis (MESMA) fraction images (hard-classified to the label with a fraction value above 70%) across the vicinity of Las Pintas, San Isidro, and Vegarada mountain passes, as well as for the southern slope of the Cantabrian Mountains. Non-vegetated areas comprise land cover types depicted in <a href="#remotesensing-16-04547-t001" class="html-table">Table 1</a>. CRS: EPSG:32630.</p>
Figure 7
<p>Results of the Sobol global sensitivity analysis (GSA) from the initial PROSPECT-5b and 4SAIL parametrization following uniform distributions assessed by the total Sobol index (SI; %).</p>
Figure 8
<p>Spectral deviation, in reflectance units, between Sentinel-2 observed reflectance and simulated PROSAIL reflectance in the field plots of the Las Pintas, San Isidro, and Vegarada mountain passes.</p>
Figure 9
<p>Relationship of field-measured leaf area index (LAI), fractional vegetation cover (FCOVER), and aboveground biomass (AGB) with PROSAIL-5B and normalized difference vegetation index (NDVI) retrievals. The solid red line corresponds to the linear model fit. The dashed black line is the 1:1 line.</p>
Figure 10
<p>Leaf area index (LAI), fractional vegetation cover (FCOVER), and aboveground biomass (AGB) estimates retrieved from Sentinel-2 Level-2A scenes through the PROSAIL-5B radiative transfer model (RTM) across the vicinity of the Las Pintas, San Isidro, and Vegarada mountain passes. The white pixels represent non-grassland land cover as determined by multiple endmember spectral mixture analysis (MESMA) conditions, i.e., pixels with a grassland fraction value below 70%.</p>
Figure 11
<p>Leaf area index (LAI), fractional vegetation cover (FCOVER), and aboveground biomass (AGB) estimates retrieved from Sentinel-2 Level-2A scenes through the PROSAIL-5B radiative transfer model (RTM) across the southern slope of the Cantabrian Mountains. The white pixels represent non-grassland land cover as determined by multiple endmember spectral mixture analysis (MESMA) conditions, i.e., pixels with a grassland fraction value below 70%.</p>
Figure 12
<p>Results of K-means clustering analysis depicting the spatial variability of the leaf area index (LAI), fractional vegetation cover (FCOVER), and aboveground biomass (AGB) estimates in the Las Pintas mountain pass. We show the mean value of the three biophysical variables for each cluster.</p>
20 pages, 7208 KiB  
Article
Combining UAV Multispectral Imaging and PROSAIL Model to Estimate LAI of Potato at Plot Scale
by Shuang Li, Yongxin Lin, Ping Zhu, Liping Jin, Chunsong Bian and Jiangang Liu
Agriculture 2024, 14(12), 2159; https://doi.org/10.3390/agriculture14122159 - 27 Nov 2024
Viewed by 732
Abstract
Accurate and rapid estimation of the leaf area index (LAI) is essential for assessing crop growth and nutritional status, guiding farm management, and providing valuable phenotyping data for plant breeding. Compared to the tedious and time-consuming manual measurements of the LAI, remote sensing has emerged as a valuable tool for rapid and accurate estimation of the LAI; however, the empirical inversion modeling methods face challenges of low efficiency for actual LAI measurements and poor model interpretability. The integration of radiative transfer models (RTMs) can overcome these problems to some extent. The aim of this study was to explore the potential of combining the PROSAIL model with high-resolution unmanned aerial vehicle (UAV) multispectral imaging to estimate the LAI across different growth stages at the plot scale. In this study, four inversion strategies for estimating the LAI were tested. Firstly, two types of lookup tables (LUTs) were built to estimate potato LAI of different varieties across different growth stages. Specifically, LUT1 was based on band reflectance, and LUT2 was based on vegetation index. Secondly, hybrid models combining LUTs generated by PROSAIL with two machine learning algorithms (random forest (RF) and partial least squares regression (PLSR)) were built to estimate potato LAI. The coefficient of determination (R2) of the models estimating LAI from LUTs ranged from 0.24 to 0.64. The hybrid method that integrates UAV multispectral data, PROSAIL, and machine learning significantly improved the accuracy of LAI estimation. Compared to the results based on LUT2, the hybrid model achieved higher accuracy, with the R2 of the inversion model improved by 30% to 263%. The LAI retrieval model using the PROSAIL model and PLSR achieved an R2 as high as 0.87, while the R2 using the RF algorithm ranged from 0.33 to 0.81. The proposed hybrid model, integrating UAV multispectral data, PROSAIL, and PLSR, can achieve accuracy comparable to the empirical inversion models, which can alleviate the labor-intensive process of handheld LAI measurements for building inversion models. Thus, the hybrid approach provides a feasible and efficient strategy for estimating the LAI of potato varieties across different growth stages at the plot scale.
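The LUT strategies described above amount to simulating spectra over a parameter grid and picking the entry closest to the observation. A toy sketch with a made-up two-band forward model standing in for PROSAIL (coefficients and bands are invented for illustration):

```python
import numpy as np

# Toy two-band stand-in for the PROSAIL forward model: reflectance
# as a smooth function of LAI (invented coefficients).
def toy_canopy_reflectance(lai):
    red = 0.20 * np.exp(-0.6 * lai)  # red absorption deepens with LAI
    nir = 0.15 + 0.08 * lai          # NIR scattering rises with LAI
    return np.array([red, nir])

# Build the lookup table over a plausible LAI range
lai_grid = np.linspace(0.0, 6.0, 601)
lut = np.stack([toy_canopy_reflectance(v) for v in lai_grid])

# "Observed" spectrum generated at LAI = 3.2 (noise-free for the sketch)
observed = toy_canopy_reflectance(3.2)

# Invert: pick the LUT entry with the smallest spectral RMSE
cost = np.sqrt(np.mean((lut - observed) ** 2, axis=1))
lai_estimate = float(lai_grid[np.argmin(cost)])
```

A band-reflectance LUT (LUT1) matches on raw bands as above; a vegetation-index LUT (LUT2) would first collapse each spectrum to an index before matching.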
Figure 1
<p>Location of the study area and field plot distribution.</p>
Figure 2
<p>Unmanned aerial vehicle multispectral image acquisition system.</p>
Figure 3
<p>Spectral response function of the RedEdge-P multispectral sensor used.</p>
Figure 4
<p>Leaf area photo background removal effect.</p>
Figure 5
<p>Local sensitivity analysis of PROSAIL model parameters. Figures (<b>a</b>,<b>b</b>) show the variation in spectral reflectance at 400–2500 nm for 3 &lt; LAI &lt; 6 and 0 &lt; LAI &lt; 3; (<b>c</b>–<b>o</b>) show the effects of Cab, Car, Cm, Cw, Cbrown, hspot, ALA, N, skyl, psoil, tts, tto, and psi on the spectral reflectance at 400 nm–2500 nm, respectively.</p>
Figure 6
<p>Global sensitivity analysis of the main PROSAIL model parameters: (<b>a</b>) the result of global sensitivity analysis for 0 &lt; LAI &lt; 3; (<b>b</b>) the result of global sensitivity analysis for 3 &lt; LAI &lt; 6.</p>
Figure 7
<p>Results of LAI inversion using LUT1.</p>
Figure 8
<p>LAI inversion results of potato varieties across all growth stages using four strategies. The above results are for the model validation set, and the hybrid model results are for model validation using measured data.</p>
23 pages, 9861 KiB  
Article
A Synergistic Framework for Coupling Crop Growth, Radiative Transfer, and Machine Learning to Estimate Wheat Crop Traits in Pakistan
by Rana Ahmad Faraz Ishaq, Guanhua Zhou, Aamir Ali, Syed Roshaan Ali Shah, Cheng Jiang, Zhongqi Ma, Kang Sun and Hongzhi Jiang
Remote Sens. 2024, 16(23), 4386; https://doi.org/10.3390/rs16234386 - 24 Nov 2024
Viewed by 1057
Abstract
The integration of the Crop Growth Model (CGM), Radiative Transfer Model (RTM), and Machine Learning Algorithm (MLA) for estimating crop traits represents a cutting-edge area of research. This integration requires in-depth study to address RTM limitations, particularly similar spectral responses arising from multiple input combinations. This study proposes the integration of CGM and RTM for crop trait retrieval and evaluates the performance of CGM output-based RTM spectra generation for multiple crop trait estimation without biased sampling, using machine learning models. Moreover, PROSAIL spectra as training against Harmonized Landsat Sentinel-2 (HLS) as testing was also compared with HLS data only as an alternative. It was found that satellite data (HLS, 80:20) not only consistently performed better, but PROSAIL (train) and HLS (test) also had satisfactory results for multiple crop traits from uniform training samples, in spite of differences between simulated and real data. PROSAIL-HLS has an RMSE of 0.67 for leaf area index (LAI), 5.66 µg/cm2 for chlorophyll a+b (Cab), 0.0003 g/cm2 for dry matter content (Cm), and 0.002 g/cm2 for leaf water content (Cw), against HLS only, with an RMSE of 0.40 for LAI, 3.28 µg/cm2 for Cab, 0.0002 g/cm2 for Cm, and 0.001 g/cm2 for Cw. Optimized machine learning models, namely Extreme Gradient Boost (XGBoost) for LAI, Support Vector Machine (SVM) for Cab, and Random Forest (RF) for Cm and Cw, were deployed for temporal mapping of traits to be used for wheat productivity enhancement.
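The PROSAIL-train / HLS-test setup above can be sketched as fitting a regressor on simulated spectra and evaluating it on separately generated "observed" spectra; everything below (the toy forward model, band definitions, noise level) is invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Toy forward model standing in for PROSAIL: four invented bands
# whose reflectance depends on LAI, plus sensor-like noise.
def simulate_spectra(lai, rng):
    bands = np.stack([
        0.20 * np.exp(-0.5 * lai),  # red
        0.15 + 0.07 * lai,          # NIR
        0.10 + 0.02 * lai,          # red edge
        0.25 - 0.01 * lai,          # SWIR-like
    ], axis=1)
    return bands + rng.normal(0.0, 0.005, bands.shape)

# "Simulated" training set (the PROSAIL role) ...
lai_train = rng.uniform(0.0, 7.0, 500)
x_train = simulate_spectra(lai_train, rng)

# ... and a separately generated test set (the HLS role)
lai_test = rng.uniform(0.0, 7.0, 100)
x_test = simulate_spectra(lai_test, rng)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(x_train, lai_train)
lai_pred = model.predict(x_test)
rmse = float(np.sqrt(np.mean((lai_pred - lai_test) ** 2)))
```

In the real study, the gap between simulated training spectra and actual HLS reflectance is what degrades PROSAIL-HLS relative to HLS-only training.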
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
Figure 1
<p>Methodology flowchart.</p>
Figure 2
<p>Location map.</p>
Figure 3
<p>APSIM calibration for LAI.</p>
Figure 4
<p>Scatterplots showing each model performance on Dataset-1 (PROSAIL-HLS) against each crop trait.</p>
Figure 5
<p>Scatterplots showing each model performance on Dataset-2 (HLS only) against each crop trait.</p>
Figure 6
<p>Temporal mapping of wheat crop traits.</p>
Figure 7
<p>Reflectance differences and changes in traits over time (Abdul Sattar Village Massa Kota). (<b>a</b>) PROSAIL reflectance over time. (<b>b</b>) HLS reflectance over time. (<b>c</b>) LAI and Cab over time.</p>
18 pages, 16650 KiB  
Article
Mapping Seagrass Distribution and Abundance: Comparing Areal Cover and Biomass Estimates Between Space-Based and Airborne Imagery
by Victoria J. Hill, Richard C. Zimmerman, Dorothy A. Byron and Kenneth L. Heck
Remote Sens. 2024, 16(23), 4351; https://doi.org/10.3390/rs16234351 - 21 Nov 2024
Viewed by 781
Abstract
This study evaluated the effectiveness of Planet satellite imagery in mapping seagrass coverage in Santa Rosa Sound, Florida. We compared very-high-resolution aerial imagery (0.3 m) collected in September 2022 with high-resolution Planet imagery (~3 m) captured during the same period. Using supervised classification techniques, we accurately identified expansive, continuous seagrass meadows in the satellite images, successfully classifying 95.5% of the 11.18 km2 of seagrass area delineated manually from the aerial imagery. Our analysis utilized an occurrence frequency (OF) product, which was generated by processing ten clear-sky images collected between 8 and 25 September 2022 to determine the frequency with which each pixel was classified as seagrass. Seagrass patches encompassing at least nine pixels (~200 m2) were almost always detected by our classification algorithm. Using an OF threshold of 60% or greater provided a high level of confidence in seagrass presence while effectively reducing the impact of small misclassifications, often of individual pixels, that appeared sporadically in individual images. The image-to-image uncertainty in seagrass retrieval from the satellite images was 0.1 km2, or 2.3%, reflecting the robustness of our classification method and allowing confidence in the accuracy of the seagrass area estimate. The satellite-retrieved leaf area index (LAI) was consistent with previous in situ measurements, leading to the estimate that 2700 tons of carbon per year are produced by the Santa Rosa Sound seagrass ecosystem, equivalent to a drawdown of approximately 10,070 tons of CO2. This satellite-based approach offers a cost-effective, semi-automated, and scalable method of assessing the distribution and abundance of submerged aquatic vegetation that provides numerous ecosystem services.
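The occurrence-frequency (OF) product described above is simply the per-pixel share of images classified as seagrass, thresholded at 60%. A minimal sketch on a toy 3×3 scene with invented masks:

```python
import numpy as np

# Ten per-image boolean seagrass masks over a toy 3x3 scene
masks = np.zeros((10, 3, 3), dtype=bool)
masks[:, 0, 0] = True    # stable seagrass pixel (10 of 10 images)
masks[:7, 1, 1] = True   # seagrass in 7 of 10 images
masks[0, 2, 2] = True    # one-off misclassification

# Occurrence frequency: share of images in which each pixel is seagrass
of = masks.mean(axis=0)

# Keep pixels classified as seagrass in at least 60% of the images
seagrass = of >= 0.6
```

The single-image misclassification (OF = 0.1) is rejected by the threshold, which is exactly the noise-suppression effect the abstract describes.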
Figure 1
<p>(<b>A</b>). The location of Pensacola Bay is indicated by the red box. (<b>B</b>). The location of Santa Rosa Sound is indicated by the red outline. Underlying Ocean basemap from Esri (ArcGIS Pro 3.3.2). Sources: Esri, GEBCO, NOAA, National Geographic, DeLorme, HERE, Geonames.org, and other contributors.</p>
Figure 2
<p>Flowchart outlining the processing steps for satellite and aerial imagery.</p>
Figure 3
<p>Aerial imagery with areas identified as containing seagrass overlaid as polygons (red). The white-dashed box is the location of image overlap used in uncertainty estimates; solid white boxes numbered 1 through 5 are the locations of examples shown in later figures. Green dots highlight the locations of East Sabine and Big Sabine Point, mentioned later in the text.</p>
Figure 4
<p>Seagrass area polygons derived from aerial imagery (black lines), and Planet-identified seagrass using <span class="html-italic">OF</span> thresholds of ≥60% and ≥90% overlaid on aerial imagery. (<b>A</b>) Subset of aerial imagery highlighted as Box 3 in <a href="#remotesensing-16-04351-f003" class="html-fig">Figure 3</a>; (<b>B</b>) subset of aerial imagery highlighted as Box 5 in <a href="#remotesensing-16-04351-f003" class="html-fig">Figure 3</a>.</p>
Figure 5
<p>Proportion of false-negative area by polygon size (with <span class="html-italic">OF</span> ≥ 60%) for aerial polygons where zero Planet pixels were identified as seagrass.</p>
Figure 6
<p>Previous (black) seagrass areal extent for Santa Rosa Sound based on historic data [<a href="#B32-remotesensing-16-04351" class="html-bibr">32</a>,<a href="#B33-remotesensing-16-04351" class="html-bibr">33</a>] and 2022 estimate (red) derived from this analysis. Historical area generated by setting the average patchy density at 50% and summing continuous + patchy × 0.5 areas provided in the literature.</p>
Figure 7
<p>(<b>A</b>) RGB representation of a subset of aerial imagery showing the large continuous seagrass meadow at Big Sabine Point in the middle of Santa Rosa Sound (see <a href="#remotesensing-16-04351-f003" class="html-fig">Figure 3</a>, box 1). (<b>B</b>) Seagrass <span class="html-italic">OF</span> derived from all satellite images overlaid on the aerial imagery from panel (<b>A</b>). (<b>C</b>) Mean leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (<b>A</b>). (<b>D</b>) Polygons based on aerial imagery (dashed white lines) and polygons generated for <span class="html-italic">OF</span> ≥ 60% (dashed red lines) overlaid on Planet imagery.</p>
Figure 8
<p>(<b>A</b>) RGB representation of a subset of aerial imagery showing a seagrass meadow along the north shore of Santa Rosa Sound. The location of this site is shown in <a href="#remotesensing-16-04351-f003" class="html-fig">Figure 3</a>, box 2. (<b>B</b>) Planet-derived <span class="html-italic">OF</span> overlaid on the aerial imagery from panel (<b>A</b>). (<b>C</b>) Leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (<b>A</b>). (<b>D</b>) Polygons based on aerial imagery (dashed white lines) and polygons generated for <span class="html-italic">OF</span> ≥ 60% (dashed red lines) overlaid on Planet imagery.</p>
Full article ">Figure 9
<p>(<b>A</b>). Subset of RGB aerial images just west of Navarre Bridge; the location of this site is shown in <a href="#remotesensing-16-04351-f003" class="html-fig">Figure 3</a>, box 3. Seagrass meadows along the shore and in the middle of Santa Rosa Sound were obscured by suspended sediment plumes in the aerial image (<b>B</b>). Planet-derived <span class="html-italic">OF</span> overlaid on the aerial imagery from panel (<b>A</b>). (<b>C</b>) Leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (<b>A</b>). (<b>D</b>) Polygons based on aerial imagery (dashed white lines) and polygons generated from <span class="html-italic">OF</span> ≥ 60% (dashed red lines) overlaid on Planet imagery.</p>
Full article ">Figure 10
<p>(<b>A</b>) Subset of RGB aerial images showing an example of seagrass distribution over a shallow sand bank along the southern shore of Santa Rosa Sound. The location of this site is shown in <a href="#remotesensing-16-04351-f003" class="html-fig">Figure 3</a> box 4; the white arrow points to shallow sand with small seagrass patches. (<b>B</b>). Planet-derived <span class="html-italic">OF</span> overlaid on the aerial imagery from panel (<b>A</b>). (<b>C</b>) Leaf area index, above-ground biomass, and below-ground biomass overlaid on the aerial imagery from panel (<b>A</b>). (<b>D</b>) Polygons based on aerial imagery (dashed white lines) and polygons generated for <span class="html-italic">OF</span> ≥ 60% (dashed red lines) overlaid on Planet imagery.</p>
Full article ">
17 pages, 11054 KiB  
Article
Advanced Plant Phenotyping: Unmanned Aerial Vehicle Remote Sensing and CimageA Software Technology for Precision Crop Growth Monitoring
by Hongyu Fu, Jianning Lu, Guoxian Cui, Jihao Nie, Wei Wang, Wei She and Jinwei Li
Agronomy 2024, 14(11), 2534; https://doi.org/10.3390/agronomy14112534 - 28 Oct 2024
Viewed by 1215
Abstract
In production activities and breeding programs, large-scale investigations of high-throughput crop phenotype information are needed to improve management and decision-making. The development of UAV (unmanned aerial vehicle) remote sensing technology provides a new means for the large-scale, efficient, and accurate acquisition of crop phenotypes, but its practical application and popularization are hindered by the complicated data processing required. To date, there is no automated system that can use the canopy images acquired by UAV to conduct a phenotypic character analysis. To address this bottleneck, we developed a new scalable software package called CimageA. CimageA takes crop canopy images obtained by UAV as input and combines machine vision and machine learning techniques to conduct high-throughput processing and phenotyping of crop remote sensing data. First, zoning tools are applied to draw an area of interest (AOI). Then, CimageA can rapidly extract vital remote sensing information such as the color, texture, and spectrum of the crop canopy in the plots. In addition, we developed data analysis modules that estimate and quantify related phenotypes (such as leaf area index, canopy coverage, and plant height) by analyzing the association between measured crop phenotypes and CimageA-derived remote sensing eigenvalues. Through a series of experiments, we confirmed that CimageA performs well in extracting high-throughput remote sensing information regarding crops, and verified the reliability of CimageA for retrieving LAI (R2 = 0.796), estimating plant height (R2 = 0.989), and measuring planting area. In short, CimageA is an efficient and non-destructive tool for crop phenotype analysis, which is of great value for monitoring crop growth and guiding breeding decisions. Full article
(This article belongs to the Section Precision and Digital Agriculture)
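The abstract reports LAI retrieval (R2 = 0.796) and plant height estimation (R2 = 0.989) by associating measured phenotypes with CimageA-derived remote sensing eigenvalues. The paper does not publish its regression code; the sketch below shows the simplest form of that association step, an ordinary least-squares fit of one phenotype against one eigenvalue (the function name and data are illustrative, not from the paper):

```python
def fit_linear(x, y):
    """Ordinary least squares y = a + b*x; returns (a, b, r2).

    x: a remote-sensing eigenvalue per plot (e.g. a canopy vegetation index),
    y: a field-measured phenotype (e.g. LAI). Names are illustrative only.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot         # coefficient of determination
    return a, b, r2
```

In practice CimageA exports many eigenvalues per plot, so a multivariate fit would replace this single-predictor version; the R2 diagnostic is computed the same way.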
Show Figures

Figure 1
<p>Plots’ distribution and multi-channel remote sensing images of ramie test area. Result_RGB is the highest-resolution visible image of the test area, Result_Blue shows the blue channel spectral reflectance image, Result_Green shows the green channel spectral reflectance image, Result_Red shows the red channel spectral reflectance image, Result_RedEdge shows the red edge channel spectral reflectance image and Result_Nir is the spectral reflectance image of the near infrared channel.</p>
Full article
Figure 2
<p>User interface and instructions of CimageA. Numbers in the figure provide guidance: (1) File processing. (2) Image processing and parameter settings. (3) Data processing. (4) Data analysis. (5) Phenotypic visualization.</p>
Full article
Figure 3
<p>The operation process of CimageA.</p>
Full article
Figure 4
<p>Tilt correction of CimageA. (<b>A</b>) Image before tilt correction. (<b>B</b>) Image after tilt correction.</p>
Full article
Figure 5
<p>Intelligent drawing of the AOI. (<b>A</b>) Plots divided by diagonal zoning. (<b>B</b>) Plots divided by hypotenuse zoning.</p>
Full article
Figure 6
<p>The result of ramie plant segmentation based on ExR. (<b>A</b>) Original image. (<b>B</b>) Grayscale image in ExR channel. (<b>C</b>) The segmented image.</p>
Full article
Figure 7
<p>Three ramie varieties with different leaf color. (<b>A</b>) Dazhu ramie. (<b>B</b>) Xiangzhu 7. (<b>C</b>) Changshun ramie.</p>
Full article
Figure 8
<p>Extraction of ramie plant height based on UAV remote sensing images.</p>
Full article
Figure 9
<p>Crop planting area measurement. Area 1 (red area) is the ramie planting area, area 2 (green area) is the jute planting area, area 3 (blue area) is the cabbage planting area, and area 4 includes all planting areas.</p>
Full article
Figure 10
<p>Quantitative color features of the ramie with different leaf color classes.</p>
Full article
Figure 11
<p>Quantitative leaf color evaluation of three ramie varieties.</p>
Full article
Figure 12
<p>Correlation analysis between ramie LAI and remote sensing eigenvalues.</p>
Full article
Figure 13
<p>Relationship between the measured plant height and the estimated plant height.</p>
Full article
Figure 14
<p>Temporal changes of the estimated plant height.</p>
Full article
Figure 15
<p>Precision of crop area extracted by CimageA. ** indicates a significant correlation.</p>
Full article
29 pages, 12094 KiB  
Article
Bitemporal Radiative Transfer Modeling Using Bitemporal 3D-Explicit Forest Reconstruction from Terrestrial Laser Scanning
by Chang Liu, Kim Calders, Niall Origo, Louise Terryn, Jennifer Adams, Jean-Philippe Gastellu-Etchegorry, Yingjie Wang, Félicien Meunier, John Armston, Mathias Disney, William Woodgate, Joanne Nightingale and Hans Verbeeck
Remote Sens. 2024, 16(19), 3639; https://doi.org/10.3390/rs16193639 - 29 Sep 2024
Viewed by 2104
Abstract
Radiative transfer models (RTMs) are often used to retrieve biophysical parameters from earth observation data. RTMs with multi-temporal and realistic forest representations enable radiative transfer (RT) modeling of real-world dynamic processes. To achieve more realistic RT modeling for dynamic forest processes, this study presents the 3D-explicit reconstruction of a typical temperate deciduous forest in 2015 and 2022. We demonstrate for the first time the potential of bitemporal 3D-explicit RT modeling from terrestrial laser scanning for the forward modeling and quantitative interpretation of: (1) remote sensing (RS) observations of leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR), and canopy light extinction, and (2) the impact of canopy gap dynamics on the light availability of explicit locations. Results showed that, compared to the 2015 scene, the hemispherical-directional reflectance factor (HDRF) of the 2022 forest scene decreased by a relative 3.8% and the leaf FAPAR increased by a relative 5.4%. At explicit locations where canopy gaps changed significantly between the 2015 and 2022 scenes, branch damage and gap closure significantly impacted ground light availability only under diffuse light. This study provides the first bitemporal comparison based on 3D RT modeling, using one of the most realistic bitemporal forest scenes as the structural input. This bitemporal 3D-explicit forest RT modeling allows spatially explicit modeling over time under fully controlled experimental conditions in one of the most realistic virtual environments, thus delivering a powerful tool for studying canopy light regimes as impacted by dynamics in forest structure and for developing RS inversion schemes for forest structural changes. Full article
(This article belongs to the Section Forest Remote Sensing)
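The study simulates vertical light-extinction profiles under a fixed illumination geometry (IZA 38.4°). As a greatly simplified point of reference for such profiles, the sketch below uses the textbook turbid-medium Beer-Lambert approximation; the extinction coefficient G = 0.5 (spherical leaf angle distribution) and the closed form are standard assumptions, not the paper's 3D-explicit model:

```python
import math

def light_below(cum_lai, g=0.5, sza_deg=38.4):
    """Fraction of incident direct light remaining below a cumulative
    leaf area index, via the Beer-Lambert turbid-medium approximation
    I/I0 = exp(-G * LAI / cos(theta)).

    g: leaf projection function (0.5 assumes a spherical leaf angle
    distribution); sza_deg: solar/illumination zenith angle in degrees
    (38.4 matches the paper's geometry). Both are assumptions here.
    """
    k = g / math.cos(math.radians(sza_deg))  # effective extinction coefficient
    return math.exp(-k * cum_lai)
```

A 3D-explicit scene departs from this curve exactly where gaps, clumping, and woody material matter, which is the point of the paper's comparison.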
Show Figures

Figure 1
<p>Geographic location and map of Wytham Woods with plot indicated by ‘X’ [<a href="#B60-remotesensing-16-03639" class="html-bibr">60</a>].</p>
Full article
Figure 2
<p>Spectral properties of different tree species in the plot [<a href="#B2-remotesensing-16-03639" class="html-bibr">2</a>,<a href="#B51-remotesensing-16-03639" class="html-bibr">51</a>]. (<b>a</b>) Reflectance and transmittance of leaves; (<b>b</b>) reflectance of bark.</p>
Full article
Figure 3
<p>Locations of canopy gap dynamics and photosynthetically active radiation (PAR) sensors simulated, shown in the TLS point cloud (top view): (<b>a</b>) 2015; (<b>b</b>) 2022.</p>
Full article
Figure 4
<p>Vertical profiles of different types of canopy gap dynamics observed by terrestrial laser scanning, and the position of simulated PAR sensors.</p>
Full article
Figure 5
<p>Flowchart of research methodology. QSMs of woody structure were reconstructed using leaf-off TLS data.</p>
Full article
Figure 6
<p>Segmented TLS leaf-off point cloud of 1-ha Wytham Woods forest stand (top view): (<b>a</b>) 2015; (<b>b</b>) 2022. Each color represents an individual tree.</p>
Full article
Figure 7
<p>The dynamic change of wood structure of a Common ash (<span class="html-italic">Fraxinus excelsior</span>) tree from 2015 to 2022. (<b>a</b>) 2015 leaf-off point cloud; (<b>b</b>) 2022 leaf-off point cloud.</p>
Full article
Figure 8
<p>3D-explicit reconstruction of a Sycamore (<span class="html-italic">Acer pseudoplatanus</span>) tree. (<b>a</b>) TLS point cloud colored by height (leaf-off); (<b>b</b>) QSM overlaid with TLS leaf-off point cloud; (<b>c</b>) QSM, the modeled branch length was 3863.3 m; (<b>d</b>) Fully reconstructed tree: QSM + leaves, the leaf area assigned to this tree was 888.2 m<sup>2</sup>.</p>
Full article
Figure 9
<p>The 3D-explicit models of the complete 1-ha Wytham Woods forest stand in (<b>a</b>) 2015 and (<b>b</b>) 2022. The different leaf colors represent the different tree species present in Wytham Woods. The stems and branches of all trees are shown in brown.</p>
Full article
Figure 10
<p>The vertical profiles of simulated (<b>a</b>) light extinction, (<b>b</b>) light absorption, and (<b>c</b>) leaf area per meter of height in 2015 and 2022 forest scenes. The results of light extinction and absorption were based on the PAR band. The illumination zenith angle (IZA) was 38.4° and the illumination azimuth angle (IAA) was 125.2°.</p>
Full article
Figure 11
<p>The vertical profiles of simulated (<b>a</b>) light extinction and (<b>b</b>) light absorption in the blue, green, red, and NIR bands for the 2015 and 2022 forest scenes. Illumination zenith angle (IZA) 38.4°, illumination azimuth angle (IAA) 125.2°.</p>
Full article
Figure 12
<p>Simulated top of canopy images of Wytham Woods forest scenes in 2015 and 2022. The images were simulated under nadir viewing directions and Sentinel-2 RGB bands. IZA 38.4°, IAA 125.2°. (<b>a</b>,<b>b</b>) Ultra-high resolution images in 2015 and 2022 (spatial resolution: 1 cm); (<b>d</b>,<b>e</b>) 25 cm resolution images in 2015 and 2022; (<b>g</b>,<b>h</b>) 10 m resolution images in 2015 and 2022; (<b>c</b>,<b>f</b>,<b>i</b>) Spatial pattern of HDRF variation from 2015 to 2022 (red band).</p>
Full article
Figure 13
<p>Light extinction profiles of downward PAR at location 1: (<b>a</b>) diffuse light; (<b>b</b>) midday direct light (IZA 28.4°, IAA 180°); (<b>c</b>) morning direct light (IZA 81.3°, IAA 27.3°). The X axis is the local light availability represented as the percentage of incident solar irradiance. The Y axis is the height from the simulated sensors to the ground. (<b>d</b>) The canopy gap dynamic at this location.</p>
Full article
Figure 14
<p>Light extinction profiles of downward PAR at location 2: (<b>a</b>) diffuse light; (<b>b</b>) midday direct light (IZA 28.4°, IAA 180°); (<b>c</b>) morning direct light (IZA 81.3°, IAA 27.3°). The X axis is the local light availability represented as the percentage of incident solar irradiance. The Y axis is the height from the simulated sensors to the ground. (<b>d</b>) The canopy gap dynamic at this location.</p>
Full article
Figure 15
<p>Light extinction profiles of downward PAR at location 3: (<b>a</b>) diffuse light; (<b>b</b>) midday direct light (IZA 28.4°, IAA 180°); (<b>c</b>) morning direct light (IZA 81.3°, IAA 27.3°). The X axis is the local light availability represented as the percentage of incident solar irradiance. The Y axis is the height from the simulated sensors to the ground. (<b>d</b>) The canopy gap dynamic at this location.</p>
Full article
Figure 16
<p>Light extinction profiles of downward PAR at location 4: (<b>a</b>) diffuse light; (<b>b</b>) midday direct light (IZA 28.4°, IAA 180°); (<b>c</b>) morning direct light (IZA 81.3°, IAA 27.3°). The X axis is the local light availability represented as the percentage of incident solar irradiance. The Y axis is the height from the simulated sensors to the ground. (<b>d</b>) The canopy gap dynamic at this location.</p>
Full article
21 pages, 9876 KiB  
Article
Estimation of Leaf Area Index across Biomes and Growth Stages Combining Multiple Vegetation Indices
by Fangyi Lv, Kaimin Sun, Wenzhuo Li, Shunxia Miao and Xiuqing Hu
Sensors 2024, 24(18), 6106; https://doi.org/10.3390/s24186106 - 21 Sep 2024
Cited by 1 | Viewed by 1283
Abstract
The leaf area index (LAI) is a key indicator of vegetation canopy structure and growth status, crucial for global ecological environment research. The Moderate Resolution Spectral Imager-II (MERSI-II) aboard Fengyun-3D (FY-3D) covers the globe twice daily, providing a reliable data source for large-scale and high-frequency LAI estimation. VI-based LAI estimation is effective, but the impacts of species and growth status on the sensitivity of the VI–LAI relationship are rarely considered, especially for MERSI-II. This study analyzed the VI–LAI relationship for eight biomes in China with contrasting leaf structures and canopy architectures. The LAI was estimated by adaptively combining multiple VIs and validated using MODIS, GLASS, and ground measurements. Results show that (1) species and growth stages significantly affect VI–LAI sensitivity. For example, the EVI is optimal for broadleaf crops in winter, while the RDVI is best for evergreen needleleaf forests in summer. (2) Combining vegetation indices can significantly optimize sensitivity. The accuracy of multi-VI-based LAI retrieval is notably higher than that of a single VI used for the entire year. (3) MERSI-II shows good spatial–temporal consistency with MODIS and GLASS and is more sensitive to vegetation growth fluctuations. Direct validation with ground-truth data also demonstrates that the uncertainty of retrievals is acceptable (R2 = 0.808, RMSE = 0.642). Full article
(This article belongs to the Section Remote Sensors)
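The eight VIs compared in this study (DVI, EVI, MSR, NDVI, OSAVI, RDVI, RVI, SAVI) are all simple functions of a few reflectance bands. The sketch below uses their standard published formulations; coefficient choices such as SAVI's soil factor L = 0.5 and OSAVI's 0.16 follow common literature values and may differ from the paper's exact implementation:

```python
import math

def vegetation_indices(nir, red, blue):
    """Standard formulations of the eight VIs compared in the paper.

    Inputs are surface reflectances in [0, 1] for the NIR, red, and blue
    bands (blue is needed only by EVI). Coefficients are the usual
    literature defaults, assumed rather than taken from the paper.
    """
    rvi = nir / red  # simple ratio, reused by MSR
    return {
        "DVI": nir - red,
        "NDVI": (nir - red) / (nir + red),
        "RVI": rvi,
        "EVI": 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0),
        "SAVI": 1.5 * (nir - red) / (nir + red + 0.5),
        "OSAVI": 1.16 * (nir - red) / (nir + red + 0.16),
        "RDVI": (nir - red) / math.sqrt(nir + red),
        "MSR": (rvi - 1.0) / math.sqrt(rvi + 1.0),
    }
```

An adaptive multi-VI scheme like the paper's would compute all eight per pixel and select, per biome and growth stage, the index whose empirical VI–LAI relation is most sensitive.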
Show Figures

Figure 1
<p>Land cover map of the study area using MODIS MCD12Q1 product in 2020.</p>
Full article
Figure 2
<p>Flowchart for MERSI-II LAI estimation.</p>
Full article
Figure 3
<p>Time-series curves of VIs and LAI for grasses/cereal crops.</p>
Full article
Figure 4
<p>Time-series curves of VIs and LAI for broadleaf crops.</p>
Full article
Figure 5
<p>Time-series curves of VIs and LAI for savannahs.</p>
Full article
Figure 6
<p>Time-series curves of VIs and LAI for deciduous broadleaf forests.</p>
Full article
Figure 7
<p>VI–LAI density scatter plot of grasses/cereal crops: (<b>a</b>) DVI–LAI; (<b>b</b>) EVI–LAI; (<b>c</b>) MSR–LAI; (<b>d</b>) NDVI–LAI; (<b>e</b>) OSAVI–LAI; (<b>f</b>) RDVI–LAI; (<b>g</b>) RVI–LAI; (<b>h</b>) SAVI–LAI.</p>
Full article
Figure 7 Cont.
<p>VI–LAI density scatter plot of grasses/cereal crops: (<b>a</b>) DVI–LAI; (<b>b</b>) EVI–LAI; (<b>c</b>) MSR–LAI; (<b>d</b>) NDVI–LAI; (<b>e</b>) OSAVI–LAI; (<b>f</b>) RDVI–LAI; (<b>g</b>) RVI–LAI; (<b>h</b>) SAVI–LAI.</p>
Full article
Figure 8
<p>Optimal VIs for MERSI-II LAI estimation across different biomes and growth stages. Biomes 1–8 correspond to grasses/cereal crops, shrubs, broadleaf crops, savanna, EBF, DBF, ENF, and DNF, respectively. The seven colors represent seven different input parameters, respectively.</p>
Full article
Figure 9
<p>Estimation results of different methods.</p>
Full article
Figure 10
<p>Comparison of spatial distributions of LAI differences between MERSI-II, Aqua MODIS and GLASS in mainland China in 2020: (<b>a</b>) MERSI-II LAI minus MODIS LAI; (<b>b</b>) STD of the LAI differences between MERSI-II and MODIS; (<b>c</b>) MERSI-II LAI minus GLASS LAI; (<b>d</b>) STD of the LAI differences between MERSI-II and GLASS. No-data pixels in white color are observations contaminated by cloud, shadow, aerosol, etc. Gray pixels are non-vegetation areas.</p>
Full article
Figure 11
<p>Bar chart for the proportion of different categories of LAI differences under each biome: (<b>a</b>) MERSI-II LAI minus MODIS LAI; (<b>b</b>) MERSI-II LAI minus GLASS LAI. Biomes 1–8 are grasses/cereal crops, shrubs, broadleaf crops, savanna, EBF, DBF, ENF, and DNF.</p>
Full article
Figure 12
<p>Bar chart for the proportion of different biomes under each category of LAI difference: (<b>a</b>) MERSI-II LAI minus MODIS LAI; (<b>b</b>) MERSI-II LAI minus GLASS LAI. Biomes 1–8 are grasses/cereal crops, shrubs, broadleaf crops, savanna, EBF, DBF, ENF, and DNF.</p>
Full article
Figure 13
<p>Time series of MERSI, GLASS and MODIS (2020). The red circle indicates large fluctuations in MODIS LAI.</p>
Full article
Figure 14
<p>Comparison between ground-truth data and FY-3D MERSI-II LAI. Biome 1 is grasses/cereal crops, Biome 2 is shrubs, Biome 4 is savanna, Biome 5 is evergreen broadleaf forests, and Biome 8 is deciduous needleleaf forests.</p>
Full article
40 pages, 6726 KiB  
Review
Remote Sensing Data Assimilation in Crop Growth Modeling from an Agricultural Perspective: New Insights on Challenges and Prospects
by Jun Wang, Yanlong Wang and Zhengyuan Qi
Agronomy 2024, 14(9), 1920; https://doi.org/10.3390/agronomy14091920 - 27 Aug 2024
Cited by 5 | Viewed by 3838
Abstract
The frequent occurrence of global climate change events and natural disasters highlights the importance of precision agricultural monitoring, yield forecasting, and early warning systems. Data assimilation provides a new possibility for solving the problems of low yield prediction accuracy, strong field dependence, and poor model adaptability in traditional agricultural applications. Therefore, this study conducts a systematic literature search of the Web of Science, Scopus, Google Scholar, and PubMed databases, reviews in detail the assimilation strategies based on many new remote sensing data sources, such as satellite constellations, UAVs, ground observation stations, and mobile platforms, and compares and analyzes the progress of assimilation approaches such as the forcing method, the model parameter method, the state updating method, and the Bayesian paradigm method. The results show that: (1) data assimilation with new remote sensing platforms shows significant advantages in precision agriculture, especially assimilation of emerging satellite constellation and UAV data. (2) The SWAP model is the most widely used for simulating crop growth, while the AquaCrop, WOFOST, and APSIM models have great application potential. (3) The sequential assimilation strategy is the most widely used in the field of agricultural data assimilation, especially the ensemble Kalman filter algorithm, and the hierarchical Bayesian assimilation strategy is considered a promising method. (4) Leaf area index (LAI) is the preferred assimilation variable, and studies of soil moisture (SM) and vegetation indices (VIs) have also been strengthened. In addition, the quality, resolution, and applicability of assimilation data sources are the key bottlenecks that limit the application of data assimilation in the development of precision agriculture. In the future, the development of data assimilation models is expected to become more refined, diversified, and integrated. In summary, this study provides a comprehensive reference for agricultural monitoring, yield prediction, and crop early warning using data assimilation models. Full article
(This article belongs to the Special Issue Remote Sensing Applications in Crop Monitoring and Modelling)
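The review identifies the ensemble Kalman filter as the most widely used sequential assimilation algorithm. Below is a minimal scalar sketch of one stochastic-EnKF analysis step, for example nudging an ensemble of model-simulated LAI values toward a remote-sensing observation; this is a textbook illustration, not code from any reviewed study:

```python
import random

def enkf_update(ensemble, observation, obs_error_var):
    """One scalar stochastic-EnKF analysis step.

    Kalman gain K = P / (P + R), with P the ensemble (forecast) variance
    and R the observation error variance. Each member assimilates a
    perturbed observation, the standard trick that keeps the analysis
    spread statistically consistent. State here is a single scalar
    (e.g. LAI); the multivariate case replaces the scalars with
    covariance matrices.
    """
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # forecast variance
    k = p / (p + obs_error_var)                           # Kalman gain in [0, 1)
    return [x + k * (observation + random.gauss(0.0, obs_error_var ** 0.5) - x)
            for x in ensemble]
```

In a crop-model context, the analysis ensemble is fed back as the model state at the observation date and the simulation continues to the next observation, which is the sequential loop the review describes.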
Show Figures

Figure 1
<p>Statistics of the frequency of crop model data assimilation applications in precision agriculture and its variation over time.</p>
Full article
Figure 2
<p>(<b>a</b>) Changes in peer-reviewed publications over time; (<b>b</b>) the spatial geographical distribution of data assimilation studies. The color depth directly reflects the level of research activities and output in different regions.</p>
Full article
Figure 3
<p>The development trends of different satellite sensors in detail, feature recognition, and scale requirements are compared. The red square represents the spatial resolution of adjacent RS images, modified from [<a href="#B70-agronomy-14-01920" class="html-bibr">70</a>].</p>
Full article
Figure 4
<p>Sketch of crop model framework for remote sensing data assimilation. Note: The figure on the left is composed of remote sensing observations, crop models, and assimilation systems, and the figure on the right is the assimilation comparison between the regeneration assimilation process and the whole growth period. The green line indicates that remote sensing observations were added, and the blue line indicates that no remote sensing observations were added.</p>
Full article
Figure 5
<p>The development trend and evolution process of the DA model. The upper left part represents the Earth observation data (EO), including various types of sensors, and the output is the observed values of physical and chemical parameters or the results of pre-processing and inversion. The lower right part represents the crop growth model simulation process (CGM), including initializing various types of parameter inputs, model simulation, and output as crop parameter simulation values. The middle part is the data assimilation process (DA), including the core mechanism of different models, the assimilation process, and the evolution trend.</p>
Full article
Figure 6
<p>The assimilation strategy framework of yield prediction driven directly by the assimilation model, with crop canopy temperature as input. Notes: DBA is dry biomass accumulation, kg ha<sup>−1</sup>; FBA is fresh biomass accumulation, kg ha<sup>−1</sup>; RDBA is relative DBA; RFBA is relative FBA [<a href="#B121-agronomy-14-01920" class="html-bibr">121</a>].</p>
Full article
Figure 7
<p>Strategy framework of the assimilation model based on the 4D-Var method. The left figure illustrates the process of simulating Grassland Aboveground Biomass (BM) and Leaf Area Index (LAI) with the ModVege grassland model. The right figure displays the distribution of BM and LAI after assimilation [<a href="#B135-agronomy-14-01920" class="html-bibr">135</a>].</p>
Full article
Figure 8
<p>Crop model framework for assimilating wheat LAI into SAFY based on Sentinel-2 remote sensing images [<a href="#B74-agronomy-14-01920" class="html-bibr">74</a>].</p>
Full article
Figure 9
<p>The PF assimilation method based on improved particle degradation consists of four steps: prediction, filtering, resampling, and merging [<a href="#B185-agronomy-14-01920" class="html-bibr">185</a>].</p>
Full article
Figure 10
<p>Diagrams of Metropolis–Hastings MCMC and PF. Light gray shows the outline of the target distribution (posterior). The circle represents the combination of parameters in the algorithm. (<b>a</b>) The Metropolis–Hastings MCMC sampler proposes a new candidate value based on the last sampling value, and then accepts (green) or rejects (red) according to the ratio of the likelihood approximation of the reference point. (<b>b</b>) The sequential Monte Carlo sampler weights the point-by-point likelihood values from the initial set of parameter values and selects new candidates from the current set based on weight [<a href="#B191-agronomy-14-01920" class="html-bibr">191</a>].</p>
Full article
Figure 11
<p>Variation and comparison of spatial, temporal, and spectral resolution of different satellite images [<a href="#B70-agronomy-14-01920" class="html-bibr">70</a>].</p>
Full article
20 pages, 12334 KiB  
Article
Derivation and Evaluation of LAI from the ICESat-2 Data over the NEON Sites: The Impact of Segment Size and Beam Type
by Yao Wang and Hongliang Fang
Remote Sens. 2024, 16(16), 3078; https://doi.org/10.3390/rs16163078 - 21 Aug 2024
Cited by 2 | Viewed by 1010
Abstract
The leaf area index (LAI) is a critical variable for forest ecosystem processes. Passive optical and active LiDAR remote sensing have been used to retrieve LAI. LiDAR data have good penetration to provide vertical structure distribution and deliver the ability to estimate forest LAI, as with the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2). Segment size and beam type are important for ICESat-2 LAI estimation, as they affect the number of signal photons returned. However, current ICESat-2 LAI estimation covers only a limited number of sites, and the performance of LAI estimation with different segment sizes has not been clearly compared. Moreover, ICESat-2 LAIs derived from strong and weak beams lack a comparative analysis. This study derived and evaluated LAI from ICESat-2 data over the National Ecological Observatory Network (NEON) sites in North America. The LAI estimated from ICESat-2 for different segment sizes (20, 100, and 200 m) and beam types (strong beam and weak beam) were compared with those from the airborne laser scanning (ALS) and the Copernicus Global Land Service (CGLS). The results show that the LAI derived from strong beams performs better than that of weak beams because more photon signals are received. The LAI estimated from the strong beam at the 200 m segment size shows the highest consistency with those from the ALS data (R = 0.67). Weak beams also present the potential to estimate LAI and have moderate agreement with ALS (R = 0.52). The ICESat-2 LAI shows moderate consistency with ALS for most forest types, except for the evergreen forest. The ICESat-2 LAI shows satisfactory agreement with the CGLS 300 m LAI product (R = 0.67, RMSE = 1.94) and presents a higher upper boundary. Overall, ICESat-2 can characterize canopy structural parameters and provides the ability to estimate LAI, which may promote LAI product generation from photon-counting LiDAR. Full article
(This article belongs to the Special Issue LiDAR Remote Sensing for Forest Mapping)
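The study estimates LAI from classified canopy and ground photon counts, using a vegetation-to-ground reflectance ratio ρv/ρg of 1/3 (see Figure A1). One common gap-fraction formulation of that idea is sketched below; the Beer-law extinction coefficient G = 0.5 and the exact functional form are assumptions for illustration, not necessarily the authors' model:

```python
import math

def lai_from_photons(n_canopy, n_ground, rho_ratio=1.0 / 3.0, g=0.5):
    """Estimate LAI from classified photon counts within a segment.

    Gap fraction: P = N_ground / (N_ground + N_canopy / (rho_v/rho_g)),
    i.e. canopy returns are down-weighted because vegetation reflects
    less than the ground at the laser wavelength (the paper's Appendix
    uses rho_v/rho_g = 1/3). LAI then follows Beer's law,
    LAI = -ln(P) / G, assuming near-nadir viewing and a spherical leaf
    angle distribution (G = 0.5). A sketch, not the authors' exact code.
    """
    gap = n_ground / (n_ground + n_canopy / rho_ratio)
    return -math.log(gap) / g
```

With no canopy photons the gap fraction is 1 and the LAI is 0; segments dominated by canopy photons yield small gap fractions and large LAI, which is why longer segments (more photons) stabilize the estimate.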
Show Figures

Figure 1
<p>The spatial distribution of (<b>a</b>) ICESat-2 data over the 12 National Ecological Observatory Network (NEON) sites and (<b>b</b>) an example of ICESat-2 data at the BART site. The background is a land cover map from the National Land Cover Database (NLCD).</p>
Full article ">Figure 2
<p>Statistics of point density along track distance for (<b>a</b>) ICESat-2 strong beam, (<b>b</b>) ICESat-2 weak beam, and (<b>c</b>) ALS data for different segment sizes (20 m, 100 m, and 200 m). DF, EF, MF, and WET refer to deciduous forest, evergreen forest, mixed forest, and woody wetlands, respectively.</p>
Full article ">Figure 3
<p>Comparison between the ICESat-2 LAI and the ALS LAI for different segment sizes and beam types. The upper (<b>a</b>–<b>c</b>), middle (<b>d</b>–<b>f</b>), and lower (<b>g</b>–<b>i</b>) panels correspond to the all, strong, and weak beams at segment sizes of 20 m, 100 m, and 200 m, respectively. The solid line and dashed line indicate the fitting line and 1:1 line, respectively.</p>
Full article ">Figure 4
<p>Comparison between the LAI values derived from strong-beam ICESat-2 and ALS for DF, EF, MF, and WET. The upper (<b>a</b>–<b>d</b>), middle (<b>e</b>–<b>h</b>), and lower (<b>i</b>–<b>l</b>) panels correspond to the different land cover types at segment sizes of 20 m, 100 m, and 200 m, respectively. The solid line and dashed line indicate the fitting line and 1:1 line, respectively.</p>
Full article ">Figure 5
<p>The correlation between ICESat-2 LAI and ALS LAI of each NEON site for different segment sizes and beam types. See <a href="#remotesensing-16-03078-t001" class="html-table">Table 1</a> and <a href="#remotesensing-16-03078-f002" class="html-fig">Figure 2</a> for the site names and land cover types, respectively.</p>
Full article ">Figure 6
<p>Comparison between the ICESat-2 LAI from all beams, strong beams, and weak beams and the CGLS LAI. The upper (<b>a</b>–<b>c</b>), middle (<b>d</b>–<b>f</b>), and lower (<b>g</b>–<b>i</b>) panels correspond to the all, strong, and weak beams at segment sizes of 20 m, 100 m, and 200 m, respectively. The solid line and dashed line indicate the fitting line and 1:1 line, respectively.</p>
Full article ">Figure 7
<p>Comparison between the LAI derived from all-beam ICESat-2 and CGLS for DF, EF, MF, and WET. The upper (<b>a</b>–<b>d</b>), middle (<b>e</b>–<b>h</b>), and lower (<b>i</b>–<b>l</b>) panels correspond to the different land cover types at segment sizes of 20 m, 100 m, and 200 m, respectively. The solid line and dashed line indicate the fitting line and 1:1 line, respectively.</p>
Full article ">Figure A1
<p>The variation in LAI bias at different <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>v</mi> </msub> </mrow> </semantics></math>/<math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>g</mi> </msub> </mrow> </semantics></math> values. The black dashed line represents the 1/3 value used in this study.</p>
Full article ">Figure A2
<p>The example profile of ICESat-2 photons along track distance (ATD) for DF (<b>a</b>,<b>b</b>), EF (<b>c</b>,<b>d</b>), MF (<b>e</b>,<b>f</b>), and WET (<b>g</b>,<b>h</b>) types. The left and right panels correspond to strong and weak beams, respectively. The classified photons are from ATL08 data products. The top-of-canopy, canopy, and ground photons are marked as light-green, forest-green, and orange dots, respectively. DF, EF, MF, and WET refer to deciduous forest, evergreen forest, mixed forest, and woody wetlands, respectively.</p>
Full article ">Figure A3
<p>The distribution and seasonal variation in field LAI for overall, overstory, and understory at typical NEON sites. The ratio is understory LAI divided by overall LAI.</p>
Full article ">Figure A4
<p>The ATL08 photon classification (left panel) and composed ATL08 and manual photon classification (right panel) at four example sites. (<b>a</b>,<b>e</b>) SERC site, (<b>b</b>,<b>f</b>) DELA site, (<b>c</b>,<b>g</b>) BART site, and (<b>d</b>,<b>h</b>) DSNY site.</p>
Full article ">
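The scatter comparisons above report an ordinary least-squares fitting line alongside the 1:1 line. As a minimal, illustrative sketch (not the authors' code), the fitting line and its R2 can be computed as:

```python
def linear_fit(x, y):
    """Ordinary least-squares fitting line y = a*x + b and its R^2, as drawn
    (solid line) against the 1:1 line (dashed) in the LAI scatter plots."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx        # slope
    b = my - a * mx      # intercept
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot
```

A slope near 1 and an intercept near 0 indicate agreement with the 1:1 line.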
23 pages, 11067 KiB  
Article
A Down-Scaling Inversion Strategy for Retrieving Canopy Water Content from Satellite Hyperspectral Imagery
by Meihong Fang, Xiangyan Hu, Jing M. Chen, Xueshiyi Zhao, Xuguang Tang, Haijian Liu, Mingzhu Xu and Weimin Ju
Forests 2024, 15(8), 1463; https://doi.org/10.3390/f15081463 - 20 Aug 2024
Viewed by 915
Abstract
Vegetation canopy water content (CWC) crucially affects stomatal conductance and photosynthesis and, consequently, is a key state variable in advanced ecosystem models. Remote sensing has been shown to be an effective tool for retrieving CWCs. However, the retrieval of the CWC from satellite [...] Read more.
Vegetation canopy water content (CWC) crucially affects stomatal conductance and photosynthesis and, consequently, is a key state variable in advanced ecosystem models. Remote sensing has been shown to be an effective tool for retrieving CWCs. However, the retrieval of the CWC from satellite remote sensing data is affected by the vegetation canopy structure and soil background. This study proposes a methodology that combines a modified spectral down-scaling model with a high-universality leaf water content inversion model to retrieve the CWC through constraining the impacts of canopy structure and soil background on CWC retrieval. First, canopy spectra acquired by satellite sensors were down-scaled to leaf reflectance spectra according to the probabilities of viewing the sunlit foliage (PT) and background (PG) and the estimated spectral multiple scattering factor (M). Then, leaf water content, or equivalent water thickness (EWT), was obtained from the down-scaled leaf reflectance spectra via a leaf-scale EWT inversion model calibrated with PROSPECT simulation data. Finally, the CWC was calculated as the product of the estimated leaf EWT and canopy leaf area index. Validation of this coupled model was performed using satellite-ground synchronous observation data across various vegetation types within the study area, affirming the model’s broad applicability. Results indicate that the modified spectral down-scaling model accurately retrieves leaf reflectance spectra, aligning closely with site-level measured spectra. Compared to the direct inversion approach, which performs poorly with Hyperion satellite images, the down-scale strategy notably excels. 
Specifically, the Similarity Water Index (SWI)-based canopy EWT coupled model achieved the most precise estimation, with a normalized Root Mean Square Error (nRMSE) of 15.28% and an adjusted R2 of 0.77, surpassing the best-performing index-based model, built on the Shortwave Angle Normalized Index (SANI) (nRMSE = 15.61%, adjusted R2 = 0.52). Given its calibration using simulated data, this coupled model proved to be a potent method for extracting canopy EWT from satellite imagery, suggesting its applicability to retrieving other vegetation biochemical components from satellite data. Full article
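The last step of the retrieval chain above is a simple product: CWC per unit ground area is the down-scaled leaf EWT multiplied by the canopy LAI, and accuracy is then summarized with an nRMSE. A minimal sketch of both calculations (function names are illustrative, and normalizing the RMSE by the observed mean is an assumed convention, since the abstract does not specify it):

```python
import math

def canopy_water_content(leaf_ewt_g_cm2, lai):
    """CWC per unit ground area: leaf equivalent water thickness times LAI."""
    return leaf_ewt_g_cm2 * lai

def nrmse_percent(estimated, observed):
    """RMSE between estimates and observations, normalized by the observed mean."""
    n = len(observed)
    rmse = math.sqrt(sum((e - o) ** 2 for e, o in zip(estimated, observed)) / n)
    return 100.0 * rmse / (sum(observed) / n)
```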
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
Show Figures

Figure 1
<p>Study area and sampling locations: (<b>a</b>) the study area was located in Menghai county (21.98° N, 100.29° E), Xishuangbanna, Yunnan province in southwest China. (<b>b</b>) The RGB true color image of Hyperion remote sensing data and (<b>c</b>) sampling points for ground synchronous observation.</p>
Full article ">Figure 2
<p>Workflow of the coupled down-scaling inversion strategy retrieving leaf-level and canopy-level water content from satellite data considering canopy structure and background effects.</p>
Full article ">Figure 3
<p>Sensitivity of the simulated probability of viewing sunlit foliage (PT) (<b>a</b>) and sunlit background (PG) (<b>b</b>) to the solar zenith angle (SZA), LAI, and radius of tree crowns. Here, the tree density was set at 4000 trees per hectare and VZA = 0°.</p>
Full article ">Figure 4
<p>Correlations of NSAI with PT (<b>a</b>) and PG (<b>b</b>) for the Hyperion synthetic data under conditions of normal soil (N, orange dots), dry soil (D, green dots), and wet soil (W, blue dots). A total of 13,950,144 Hyperion samples and 10,764 Hyperion scenes are available for analysis. On 33 Hyperion pixels, we compare PT (<b>c</b>) and PG (<b>d</b>) estimates using NSAI-based models to the reference values inverted with the 4-Scale GO model. The red straight lines are the 1:1 line. Correlations of NSAI with PT (<b>a</b>) and comparisons of PT estimated using NSAI with the reference values (<b>c</b>) are adapted from Fang et al. [<a href="#B25-forests-15-01463" class="html-bibr">25</a>].</p>
Full article ">Figure 5
<p>Spatial patterns of LAI (<b>a</b>) derived using MSR<sub>705</sub> and spatial distribution of PT (<b>b</b>) and PG (<b>c</b>) estimated using NSAI derived from the Hyperion image over the study area at a spatial resolution of 30 m.</p>
Full article ">Figure 6
<p>Correlations between leaf EWT and SWI for the measured data (<b>a</b>) and the simulated data using the PROSPECT model (<b>b</b>). Validation of leaf EWT retrieved using the SWI-based model derived from measured data (<b>c</b>) and the simulated data (<b>d</b>). All leaf reflectance spectra were resampled to Hyperion-equivalent spectra.</p>
Full article ">Figure 7
<p>Spatial distribution of average leaf EWT inverted from coupled down-scaling inversion strategy (<b>a</b>) and canopy water content per unit ground surface area derived based on the LAI image and retrieved average leaf EWT (<b>b</b>). The spatial resolution of the image is 30 × 30 m.</p>
Full article ">Figure 8
<p>Comparison of measured CWC against those retrieved from SWI-based (<b>a</b>) and SANI-based (<b>b</b>) coupled models using the down-scaling inversion strategy.</p>
Full article ">Figure A1
<p>After preprocessing of the Hyperion image, spectra were compared between (<b>a</b>) vegetation with different crown closures (high, moderate, and low coverage) and (<b>b</b>) soil background with red and gray hues. Data were adapted from Fang et al. [<a href="#B25-forests-15-01463" class="html-bibr">25</a>].</p>
Full article ">Figure A2
<p>Correlations of the viewing probabilities of sunlit crown and background (PT and PG) based on the 4-Scale GO model simulations from the Hyperion synthetic data, which includes 10,764 scenes.</p>
Full article ">
23 pages, 9448 KiB  
Article
Monitoring Biophysical Variables (FVC, LAI, LCab, and CWC) and Cropland Dynamics at Field Scale Using Sentinel-2 Time Series
by Reza Hassanpour, Abolfazl Majnooni-Heris, Ahmad Fakheri Fard and Jochem Verrelst
Remote Sens. 2024, 16(13), 2284; https://doi.org/10.3390/rs16132284 - 22 Jun 2024
Cited by 3 | Viewed by 1608
Abstract
Biophysical variables play a crucial role in understanding phenological stages and crop dynamics, optimizing ultimate agricultural practices, and achieving sustainable crop yields. This study examined the effectiveness of the Sentinel-2 Biophysical Processor (S2BP) in accurately estimating crop dynamics descriptors, including fractional vegetation cover [...] Read more.
Biophysical variables play a crucial role in understanding phenological stages and crop dynamics, optimizing ultimate agricultural practices, and achieving sustainable crop yields. This study examined the effectiveness of the Sentinel-2 Biophysical Processor (S2BP) in accurately estimating crop dynamics descriptors, including fractional vegetation cover (FVC), leaf area index (LAI), leaf chlorophyll a and b (LCab), and canopy water content (CWC). The evaluation was conducted using estimation quality indicators (EQIs) and comprehensive ground measurements throughout the entire growing season at the field scale. To identify soil and vegetation pixels, the spectral unmixing technique was employed. According to the EQIs, the best retrievals were obtained for FVC in around 99.9% of the 23,976 pixels that were analyzed during the growing season. For LAI, LCab, and CWC, over 60% of the examined pixels had inputs that were out-of-range. Furthermore, in over 35% of the pixels, the output values for LCab and CWC were out-of-range. The FVC, LAI, and LCab estimates agreed well with ground measurements (R2 = 0.62–0.85), whereas a discrepancy was observed for CWC estimates when compared with ground measurements (R2 = 0.51). Furthermore, the uncertainties of the FVC, LAI, LCab, and CWC estimates were 0.09, 0.81 m2/m2, 60.85 µg/cm2, and 0.02 g/cm2, respectively, based on comparisons to ground FVC, LAI, Cab, and CWC measurements. Considering the EQIs and uncertainty metrics, the order of estimation accuracy of the four variables was FVC > LAI > LCab > CWC. Our analysis revealed that temporal variations of FVC, LAI, and LCab were primarily driven by field-scale events such as sowing date, growing period, and harvesting time, highlighting their sensitivity to agricultural practices. The robustness of the S2BP results could be enhanced by implementing a pixel identification algorithm, such as the spectral unmixing embedded here.
Overall, this study provides detailed, pixel-by-pixel insights into the performance of S2BP in estimating FVC, LAI, LCab, and CWC, which are crucial for monitoring crop dynamics in precision agriculture. Full article
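The 0 / 1 / >1 flag convention reported here (0 = best retrieval, 1 = an input out-of-range, >1 = an output out-of-range) can be sketched as a simple range check. This is an illustrative toy under assumed range definitions, not the actual S2BP implementation:

```python
def estimation_quality(band_reflectances, input_ranges, retrieved_value,
                       output_range, tolerance=0.0):
    """Toy estimation quality indicator (EQI): 0 = best retrieval,
    1 = some input band reflectance is outside its valid range,
    2 (i.e. >1) = the retrieved variable is outside its valid range plus tolerance."""
    for x, (lo, hi) in zip(band_reflectances, input_ranges):
        if not lo <= x <= hi:
            return 1
    lo, hi = output_range
    if not (lo - tolerance) <= retrieved_value <= (hi + tolerance):
        return 2
    return 0
```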
(This article belongs to the Collection Sentinel-2: Science and Applications)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Geographical location of study area.</p>
Full article ">Figure 2
<p>Validation of the Land European Remote Sensing Instruments (VALERI) sampling approach for each elementary sampling unit (ESU). A, B, C, D and E represent the measurement or sampling points within each ESU.</p>
Full article ">Figure 3
<p>The framework of the biophysical variables retrieval algorithm using Sentinel-2 imagery. In the PROSPECT model, N, C<sub>ab</sub>, C<sub>w</sub>, C<sub>m</sub>, and C<sub>bp</sub> represent the leaf mesophyll structure index, chlorophyll content (µg/cm<sup>2</sup>), water content (g/cm<sup>2</sup>), dry matter content (g/cm<sup>2</sup>), and brown pigment content, respectively. In the SAIL model, LAI, ALA, h<sub>spot</sub>, ρ<sub>soil</sub>, θ<sub>s</sub>, θ<sub>v</sub>, and s<sub>v</sub> correspond to the leaf area index (m<sup>2</sup>/m<sup>2</sup>), average leaf angle (◦), hot spot parameter, soil reflectance, solar zenith angle (◦), view zenith angle (◦), and the relative azimuth angle between solar and view directions (◦), respectively [<a href="#B45-remotesensing-16-02284" class="html-bibr">45</a>].</p>
Full article ">Figure 4
<p>Flowchart illustrating the algorithm of estimation quality indicators (EQIs) for fractional vegetation cover (FVC), leaf area index (LAI), leaf chlorophyll a and b (LC<sub>ab</sub>), and canopy water content (CWC). The TOC, B, and T denote top of canopy reflectance, Sentinel-2 bands, and output range tolerance, respectively.</p>
Full article ">Figure 5
<p>Frequency of soil and vegetation pixels during the growing season.</p>
Full article ">Figure 6
<p>Overall estimation quality for fractional vegetation cover (<b>a</b>), leaf area index (<b>b</b>), leaf chlorophyll a and b (<b>c</b>), and canopy water content (<b>d</b>) at all 23,976 pixels during the growing season. 0: best retrieval, 1: input out-of-range, &gt;1: output out-of-range.</p>
Full article ">Figure 7
<p>Frequency of estimation quality indicators (EQIs) for fractional vegetation cover (<b>a</b>), leaf area index (<b>b</b>), leaf chlorophyll a and b (LC<sub>ab</sub>) concentration (<b>c</b>), and canopy water content (<b>d</b>) during the growing season.</p>
Full article ">Figure 8
<p>Scatter plot and overall linear regression function between in-situ measurements and estimates for (<b>a</b>) fractional vegetation cover (FVC), (<b>b</b>) leaf area index (LAI), (<b>c</b>) leaf chlorophyll a and b (LC<sub>ab</sub>), and (<b>d</b>) canopy water content (CWC); (N = 120).</p>
Full article ">Figure 9
<p>The temporal variation of S2BP-derived (<b>a</b>) fractional vegetation cover (FVC), (<b>b</b>) leaf area index (LAI), (<b>c</b>) leaf chlorophyll a and b (LC<sub>ab</sub>) concentration, and (<b>d</b>) canopy water content (CWC) in the corn field during the growing season; the values presented for each date are the average of 30 ESUs.</p>
Full article ">Figure 10
<p>The spatial variations of (<b>a</b>) fractional vegetation cover (FVC), (<b>b</b>) leaf area index (LAI), (<b>c</b>) leaf chlorophyll a and b (LC<sub>ab</sub>), and (<b>d</b>) canopy water content (CWC) at different dates of the corn growing season.</p>
Full article ">Figure 11
<p>The temporal variation of normalized difference vegetation index (NDVI), fractional vegetation cover (FVC), and crop dynamics.</p>
Full article ">Figure A1
<p>The spatial variations of estimation quality indicators (EQI) for (<b>a</b>) fractional vegetation cover (FVC), (<b>b</b>) leaf area index (LAI), (<b>c</b>) leaf chlorophyll a and b (LC<sub>ab</sub>), and (<b>d</b>) canopy water content (CWC) at different dates of the corn growing season.</p>
Full article ">
19 pages, 1949 KiB  
Article
An Angle Effect Correction Method for High-Resolution Satellite Side-View Imaging Data to Improve Crop Monitoring Accuracy
by Jialong Gong, Xing Zhong, Ruifei Zhu, Zhaoxin Xu, Dong Wang and Jian Yin
Remote Sens. 2024, 16(12), 2172; https://doi.org/10.3390/rs16122172 - 15 Jun 2024
Viewed by 1297
Abstract
In recent years, the advancement of CubeSat technology has led to the emergence of high-resolution, flexible imaging satellites as a pivotal source of information for the efficient and precise monitoring of crops. However, the dynamic geometry inherent in flexible side-view imaging poses challenges [...] Read more.
In recent years, the advancement of CubeSat technology has led to the emergence of high-resolution, flexible imaging satellites as a pivotal source of information for the efficient and precise monitoring of crops. However, the dynamic geometry inherent in flexible side-view imaging poses challenges in acquiring the high-precision reflectance data necessary to accurately retrieve crop parameters. This study aimed to develop an angular correction method designed to generate nadir reflectance from high-resolution satellite side-swing imaging data. The method utilized the Anisotropic Flat Index (AFX) in conjunction with a fixed set of Bidirectional Reflectance Distribution Function (BRDF) parameters to compute the nadir reflectance for the Jilin-1 GP01/02 multispectral imager (PMS). Crop parameters were retrieved using regression models based on vegetation indices; the leaf area index (LAI), fractional vegetation cover (FVC), and chlorophyll (T850 nm/T720 nm) values estimated from the angle-corrected reflectance were compared with field measurements taken in the Inner Mongolia Autonomous Region. The findings demonstrate that the proposed angular correction method significantly enhances the retrieval accuracy of the LAI, FVC, and chlorophyll from Jilin-1 GP01/02 PMS data. Notably, the retrieval accuracy for the LAI and FVC improved by over 25%. We expect this approach to show considerable potential for improving crop monitoring accuracy from high-resolution satellite side-view imaging data. Full article
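The nadir normalization described above follows the usual kernel-driven BRDF logic: multiply the observed reflectance by the ratio of the BRDF model evaluated at nadir to the model evaluated at the actual viewing geometry, with a fixed set of per-band parameters. A sketch under that assumption (the volumetric and geometric kernel values are assumed to be computed elsewhere, e.g. by RossThick/LiSparse formulas; this is not the authors' exact AFX formulation):

```python
def normalize_to_nadir(rho_obs, f_iso, f_vol, f_geo,
                       k_vol_obs, k_geo_obs, k_vol_nadir, k_geo_nadir):
    """Kernel-driven BRDF correction ("c-factor" style): scale the observed
    reflectance by modeled nadir reflectance / modeled observed reflectance."""
    modeled_obs = f_iso + f_vol * k_vol_obs + f_geo * k_geo_obs
    modeled_nadir = f_iso + f_vol * k_vol_nadir + f_geo * k_geo_nadir
    return rho_obs * modeled_nadir / modeled_obs
```

When the observation is already at nadir (identical kernel values), the correction leaves the reflectance unchanged.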
(This article belongs to the Special Issue Crops and Vegetation Monitoring with Remote/Proximal Sensing II)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The distribution map of the study area and the field survey points.</p>
Full article ">Figure 2
<p>Spectral response functions of Jilin-1 GP01/02 PMS and MODIS in visible to near-infrared bands.</p>
Full article ">Figure 3
<p>Spectral response functions of Jilin-1 GP01/02 PMS and Landsat TM in visible to near-infrared bands.</p>
Full article ">Figure 4
<p>VIs-based LAI, FVC, and chlorophyll (T850 nm/T720 nm) estimations. The <span class="html-italic">X</span>-axis is the vegetation index, the left <span class="html-italic">Y</span>-axis is the R<sup>2</sup> value, and the right <span class="html-italic">Y</span>-axis is the RMSE value. (<b>a</b>) LAI, (<b>b</b>) FVC, and (<b>c</b>) chlorophyll (T850 nm/T720 nm).</p>
Full article ">Figure 5
<p>Green band reflectance validation based on field-measured data. The <span class="html-italic">X</span>-axis is the measured reflectance, the <span class="html-italic">Y</span>-axis is the corrected reflectance, the black solid line is the 1:1 line, and the different color scatter points represent different observation geometries. (<b>a</b>) Original, (<b>b</b>) MODIS-based, (<b>c</b>) AFX, (<b>d</b>) VJB, (<b>e</b>) Fixed-Landsat, and (<b>f</b>) Fixed-Jilin-1.</p>
Full article ">Figure 6
<p>Red band reflectance validation based on field-measured data. The <span class="html-italic">X</span>-axis is the measured reflectance, the <span class="html-italic">Y</span>-axis is the corrected reflectance, the black solid line is the 1:1 line, and the different color scatter points represent different observation geometries. (<b>a</b>) Original, (<b>b</b>) MODIS-based, (<b>c</b>) AFX, (<b>d</b>) VJB, (<b>e</b>) Fixed-Landsat, and (<b>f</b>) Fixed-Jilin-1.</p>
Full article ">Figure 7
<p>NIR band reflectance validation based on field-measured data. The <span class="html-italic">X</span>-axis is the measured reflectance, the <span class="html-italic">Y</span>-axis is the corrected reflectance, the black solid line is the 1:1 line, and the different color scatter points represent different observation geometries. (<b>a</b>) Original, (<b>b</b>) MODIS-based, (<b>c</b>) AFX, (<b>d</b>) VJB, (<b>e</b>) Fixed-Landsat, and (<b>f</b>) Fixed-Jilin-1.</p>
Full article ">Figure 8
<p>LAI validation based on field-measured data. The <span class="html-italic">X</span>-axis is the measured LAI, the <span class="html-italic">Y</span>-axis is the estimated LAI, and the black solid line is the 1:1 line. (<b>a</b>) Original, (<b>b</b>) MODIS-based, (<b>c</b>) AFX, (<b>d</b>) VJB, (<b>e</b>) Fixed-Landsat, and (<b>f</b>) Fixed-Jilin-1.</p>
Full article ">Figure 9
<p>FVC validation based on field-measured data. The <span class="html-italic">X</span>-axis is the measured FVC, the <span class="html-italic">Y</span>-axis is the estimated FVC, and the black solid line is the 1:1 line. (<b>a</b>) Original, (<b>b</b>) MODIS-based, (<b>c</b>) AFX, (<b>d</b>) VJB, (<b>e</b>) Fixed-Landsat, and (<b>f</b>) Fixed-Jilin-1.</p>
Full article ">Figure 10
<p>Chlorophyll (T850 nm/T720 nm) validation based on field-measured data. The <span class="html-italic">X</span>-axis is the measured chlorophyll (T850 nm/T720 nm), the <span class="html-italic">Y</span>-axis is the estimated chlorophyll (T850 nm/T720 nm), and the black solid line is the 1:1 line. (<b>a</b>) Original, (<b>b</b>) MODIS-based, (<b>c</b>) AFX, (<b>d</b>) VJB, (<b>e</b>) Fixed-Landsat, and (<b>f</b>) Fixed-Jilin-1.</p>
Full article ">
19 pages, 3227 KiB  
Article
Hyperspectral Leaf Area Index and Chlorophyll Retrieval over Forest and Row-Structured Vineyard Canopies
by Luke A. Brown, Harry Morris, Andrew MacLachlan, Francesco D’Adamo, Jennifer Adams, Ernesto Lopez-Baeza, Erika Albero, Beatriz Martínez, Sergio Sánchez-Ruiz, Manuel Campos-Taberner, Antonio Lidón, Cristina Lull, Inmaculada Bautista, Daniel Clewley, Gary Llewellyn, Qiaoyun Xie, Fernando Camacho, Julio Pastor-Guzman, Rosalinda Morrone, Morven Sinclair, Owen Williams, Merryn Hunt, Andreas Hueni, Valentina Boccia, Steffen Dransfeld and Jadunandan Dash
Remote Sens. 2024, 16(12), 2066; https://doi.org/10.3390/rs16122066 - 7 Jun 2024
Viewed by 2102
Abstract
As an unprecedented stream of decametric hyperspectral observations becomes available from recent and upcoming spaceborne missions, effective algorithms are required to retrieve vegetation biophysical and biochemical variables such as leaf area index (LAI) and canopy chlorophyll content (CCC). In the context of missions [...] Read more.
As an unprecedented stream of decametric hyperspectral observations becomes available from recent and upcoming spaceborne missions, effective algorithms are required to retrieve vegetation biophysical and biochemical variables such as leaf area index (LAI) and canopy chlorophyll content (CCC). In the context of missions such as the Environmental Mapping and Analysis Program (EnMAP), Precursore Iperspettrale della Missione Applicativa (PRISMA), Copernicus Hyperspectral Imaging Mission for the Environment (CHIME), and Surface Biology Geology (SBG), several retrieval algorithms have been developed based upon the turbid medium Scattering by Arbitrarily Inclined Leaves (SAIL) radiative transfer model. Whilst well suited to cereal crops, SAIL is known to perform comparatively poorly over more heterogeneous canopies (including forests and row-structured crops). In this paper, we investigate the application of hybrid radiative transfer models, including a modified version of SAIL (rowSAIL) and the Invertible Forest Reflectance Model (INFORM), to such canopies. Unlike SAIL, which assumes a horizontally homogeneous canopy, such models partition the canopy into geometric objects, which are themselves treated as turbid media. By enabling crown transmittance, foliage clumping, and shadowing to be represented, they provide a more realistic representation of heterogeneous vegetation. Using airborne hyperspectral data to simulate EnMAP observations over vineyard and deciduous broadleaf forest sites, we demonstrate that SAIL-based algorithms provide moderate retrieval accuracy for LAI (RMSD = 0.92–2.15, NRMSD = 40–67%, bias = −0.64 to 0.96) and CCC (RMSD = 0.27–1.27 g m−2, NRMSD = 64–84%, bias = −0.17 to 0.89 g m−2). The use of hybrid radiative transfer models (rowSAIL and INFORM) reduces bias in LAI (RMSD = 0.88–1.64, NRMSD = 27–64%, bias = −0.78 to −0.13) and CCC (RMSD = 0.30–0.87 g m−2, NRMSD = 52–73%, bias = 0.03–0.42 g m−2) retrievals. 
Based on our results, at the canopy level, we recommend that hybrid radiative transfer models such as rowSAIL and INFORM are further adopted for hyperspectral biophysical and biochemical variable retrieval over heterogeneous vegetation. Full article
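The RMSD, NRMSD, and bias values quoted above can be reproduced from paired retrieved and in situ values; a minimal sketch (normalizing the RMSD by the in situ mean is an assumption, as the abstract does not state the normalization used):

```python
import math

def validation_stats(retrieved, in_situ):
    """Return (RMSD, NRMSD in percent, mean bias) for paired samples,
    with bias defined as retrieved minus in situ."""
    n = len(in_situ)
    diffs = [r - m for r, m in zip(retrieved, in_situ)]
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)
    nrmsd = 100.0 * rmsd / (sum(in_situ) / n)
    return rmsd, nrmsd, sum(diffs) / n
```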
(This article belongs to the Section Environmental Remote Sensing)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>True colour composite airborne hyperspectral mosaics collected during the Wytham Woods 2021 (<b>top left</b>), Wytham Woods 2018 (<b>middle left</b>), and Valencia Anchor Station 2017 (<b>bottom left</b>) campaigns, in addition to the location of the elementary sampling units (ESUs) in which in situ LAI and LCC measurements were performed (<b>middle</b>), and the location of each study site (<b>right</b>).</p>
Full article ">Figure 2
<p>True colour composite of airborne hyperspectral data over the in-scene targets (<b>a</b>) for which HCRF was determined using an ASD FieldSpec 3 VNIR spectroradiometer (<b>b</b>) in the Valencia Anchor Station 2017 campaign.</p>
Full article ">Figure 3
<p>Graphical representation of turbid medium (<b>a</b>) and hybrid radiative transfer models representing row-structured crop (<b>b</b>) and forest (<b>c</b>) canopies, in which the canopy is represented by geometric objects that are themselves treated as turbid media. Adapted from [<a href="#B53-remotesensing-16-02066" class="html-bibr">53</a>,<a href="#B54-remotesensing-16-02066" class="html-bibr">54</a>,<a href="#B55-remotesensing-16-02066" class="html-bibr">55</a>].</p>
Full article ">Figure 4
<p>Reflectance of the white tarpaulin (<b>a</b>), grey tarpaulin (<b>b</b>), black tarpaulin (<b>c</b>), and artificial football field (<b>d</b>) between 350 and 1050 nm in the Valencia Anchor Station 2017 campaign, as determined from the airborne hyperspectral data using ATCOR-4 and FLAASH, and from in situ measurements of HCRF performed using the ASD FieldSpec 3 VNIR spectroradiometer. Note that several bands are excluded in the case of the white tarpaulin due to the saturation of the airborne hyperspectral data.</p>
Full article ">Figure 5
<p>Mean (solid lines) and standard deviation (dashed lines) of observed airborne hyperspectral reflectance spectra over the considered ESUs in each campaign. Note that the excluded wavelengths correspond to noisy spectral regions dominated by water vapour absorption and dropouts (<a href="#sec2dot3-remotesensing-16-02066" class="html-sec">Section 2.3</a>).</p>
Full article ">Figure 6
<p>Validation of SAIL (<b>left</b>) and rowSAIL/INFORM (<b>right</b>) LAI (<b>a</b>,<b>b</b>) and CCC (<b>c</b>,<b>d</b>) retrievals against in situ measurements. The dashed line represents a 1:1 relationship.</p>
Full article ">