Search Results (274)

Search Parameters:
Keywords = BRDF

24 pages, 6487 KiB  
Article
Synchronous Atmospheric Correction of Wide-Swath and Wide-Field Remote Sensing Image from HJ-2A/B Satellite
by Honglian Huang, Yuxuan Wang, Xiao Liu, Rufang Ti, Xiaobing Sun, Zhenhai Liu, Xuefeng Lei, Jun Lin and Lanlan Fan
Remote Sens. 2025, 17(5), 884; https://doi.org/10.3390/rs17050884 - 1 Mar 2025
Viewed by 322
Abstract
The Chinese Huanjing Jianzai-2 (HJ-2) A/B satellites are equipped with advanced sensors, including a Multispectral Camera (MSC) and a Polarized Scanning Atmospheric Corrector (PSAC). To address the challenges of atmospheric correction (AC) for the MSC's wide-swath, wide-field images, this study proposes a pixel-by-pixel method incorporating Bidirectional Reflectance Distribution Function (BRDF) effects. The approach uses synchronous atmospheric parameters from the PSAC, an atmospheric correction lookup table, and a semi-empirical BRDF model to produce surface reflectance (SR) products through radiative, adjacency effect, and BRDF corrections. The corrected images showed significant improvements in clarity and contrast compared to pre-correction images, with minimum increases of 55.91% and 35.63%, respectively. Validation experiments in Dunhuang and Hefei, China, demonstrated high consistency between the corrected SR and ground-truth data, with maximum deviations below 0.03. For surface types not covered by ground measurements, comparisons with Sentinel-2 SR products yielded maximum deviations below 0.04. These results highlight the effectiveness of the proposed method in improving image quality and accuracy, providing reliable data support for applications such as disaster monitoring, water resource management, and crop monitoring.
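The radiative-correction core of such LUT-based AC methods can be illustrated with the standard inversion from top-of-atmosphere (TOA) reflectance to Lambertian surface reflectance. Below is a minimal sketch in Python, assuming the per-pixel atmospheric terms (path reflectance, total two-way transmittance, spherical albedo) have already been interpolated from a lookup table for the PSAC-retrieved AOD and CWV; all names and numbers are illustrative, and the paper's adjacency-effect and BRDF corrections are omitted.

```python
import numpy as np

def surface_reflectance(rho_toa, rho_path, transmittance, spherical_albedo):
    """Invert TOA reflectance to Lambertian surface reflectance using the
    standard LUT-based formula:
        y     = (rho_toa - rho_path) / T
        rho_s = y / (1 + S * y)
    where T is the total (downward x upward) scattering transmittance and
    S is the atmospheric spherical albedo, per pixel and per band."""
    y = (rho_toa - rho_path) / transmittance
    return y / (1.0 + spherical_albedo * y)

# Toy per-pixel values, as if interpolated from the AC lookup table.
rho_toa = np.array([0.18, 0.22])   # TOA reflectance of two pixels
rho_path = np.array([0.06, 0.05])  # atmospheric path reflectance
T = np.array([0.78, 0.82])         # total two-way transmittance
S = np.array([0.12, 0.10])         # spherical albedo

print(surface_reflectance(rho_toa, rho_path, T, S))
```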
Figures

Figure 1. Schematic of synchronized detection between PSAC and MSC.
Figure 2. Atmospheric correction flowchart for wide-swath and wide-field multispectral images.
Figure 3. Matching results of AOD and CWV for the MSC image. (a) Matched AOD distribution. (b) AOD distribution after linear interpolation. (c) Matched CWV distribution. (d) CWV distribution after linear interpolation.
Figure 4. Comparison of pre- and post-atmospheric correction for an HJ-2A satellite multispectral image of Beijing Daxing Airport, China. The red-marked and green-marked areas represent the regions selected for comparison and validation with Sentinel-2 data, as described in Section 4.3. (CCD1, 14 November 2022; AOD = 0.446; CWV = 0.51 g/cm².) (a) Before atmospheric correction. (b) After atmospheric correction.
Figure 5. Comparison of pre- and post-atmospheric correction for an HJ-2B satellite multispectral image of the Indian Plains region. The red-marked and green-marked areas represent the regions selected for comparison and validation with Sentinel-2 data, as described in Section 4.3. (CCD3, 25 November 2022; AOD = 0.208; CWV = 0.96 g/cm².) (a) Before atmospheric correction. (b) After atmospheric correction.
Figure 6. Comparison of pre- and post-atmospheric correction for an HJ-2A satellite multispectral image of Xianning City, Hubei Province, China. The red-marked and green-marked areas represent the regions selected for comparison and validation with Sentinel-2 data, as described in Section 4.3. (CCD3, 23 December 2022; AOD = 0.564; CWV = 0.45 g/cm².) (a) Before atmospheric correction. (b) After atmospheric correction.
Figure 7. The contrast, clarity, and their improvements for the multispectral images of Daxing Airport, Beijing, China, before and after atmospheric correction (HJ-2A satellite). (a) Contrast. (b) Clarity.
Figure 8. The contrast, clarity, and their improvements for the multispectral images of the Indian Plains region, before and after atmospheric correction (HJ-2B satellite). (a) Contrast. (b) Clarity.
Figure 9. The contrast, clarity, and their improvements for the multispectral images of Xianning, Hubei Province, China, before and after atmospheric correction (HJ-2A satellite). (a) Contrast. (b) Clarity.
Figure 10. Comparison of pre- and post-atmospheric correction for an HJ-2B satellite multispectral image at the Dunhuang site in China. The red-marked area represents the ground measurement region at the Dunhuang site. (a) Before atmospheric correction. (b) After atmospheric correction.
Figure 11. Comparison of pre- and post-atmospheric correction for an HJ-2B satellite multispectral image at the northern high-reflectance site in Dunhuang, China. The red-marked area represents the ground measurement region at the high-reflectance site. (a) Before atmospheric correction. (b) After atmospheric correction.
Figure 12. Comparison of pre- and post-atmospheric correction for an HJ-2A satellite multispectral image of a suburban area of Hefei, Anhui Province, China. The red-marked and blue-marked areas represent the ground measurement regions for the wheat field and river water, respectively. (a) Before atmospheric correction. (b) After atmospheric correction.
Figure 13. Reflectance curves from the ground-based synchronized measurements. (a) Dunhuang, Gansu, China. (b) Hefei, Anhui, China.
Figure 14. Comparison of ground-measured reflectance and atmospheric-corrected SR. (a) Dunhuang site (25 January 2021, HJ-2B). (b) High-reflectance site (25 January 2021, HJ-2B). (c) Wheat field (25 March 2021, HJ-2A). (d) River water (25 March 2021, HJ-2A).
21 pages, 27582 KiB  
Article
Multi-Level Spectral Attention Network for Hyperspectral BRDF Reconstruction from Multi-Angle Multi-Spectral Images
by Liyao Song and Haiwei Li
Remote Sens. 2025, 17(5), 863; https://doi.org/10.3390/rs17050863 - 28 Feb 2025
Viewed by 165
Abstract
With the rapid development of hyperspectral applications using unmanned aerial vehicles (UAVs), the traditional assumption that ground objects exhibit Lambertian reflectance is no longer sufficient to meet the high-precision requirements of quantitative inversion and airborne hyperspectral data applications. It is therefore necessary to establish a hyperspectral bidirectional reflectance distribution function (BRDF) model suited to the imaged area. However, obtaining multi-angle information from UAV push-broom hyperspectral data is difficult: achieving uniform push-broom imaging while flexibly acquiring multi-angle data is hampered by spatial distortions, particularly at large roll or pitch angles, and by the need for multiple flights, which extends acquisition time and exacerbates uneven illumination, introducing errors into BRDF model construction. To address these issues, we propose leveraging the advantages of multi-spectral cameras, such as their compact size, lightweight design, and high signal-to-noise ratio (SNR), to reconstruct hyperspectral multi-angle data. This approach enhances spectral resolution and the number of bands while mitigating spatial distortions, and it effectively captures the multi-angle characteristics of ground objects. In this study, we collected UAV hyperspectral multi-angle data together with corresponding illumination information and atmospheric parameters, addressing a limitation of existing BRDF modeling, which does not consider outdoor ambient illumination changes and thereby loses accuracy. Based on this dataset, we propose an improved Walthall model that accounts for illumination variation. The radiance consistency of the BRDF multi-angle data is then effectively optimized, the error caused by illumination variation in BRDF modeling is reduced, and the accuracy of BRDF modeling is improved. In addition, we adopted a Transformer for spectral reconstruction, increased the number of bands through spectral dimension enhancement, and conducted BRDF modeling on the spectral reconstruction results. For the multi-level Transformer spectral dimension enhancement algorithm, we added spectral response loss constraints to improve BRDF accuracy. To evaluate the BRDF modeling and quantitative application potential of the reconstruction results, we conducted comparison and ablation experiments. Finally, we addressed the difficulty of obtaining multi-angle information imposed by hyperspectral imaging equipment, and we provide a new solution for obtaining multi-angle features of objects with higher spectral resolution using low-cost imaging equipment.
(This article belongs to the Section Remote Sensing Image Processing)
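The Walthall model mentioned in the abstract is, in its classic modified form, linear in its coefficients, so fitting it to multi-angle reflectance reduces to a least-squares solve. A minimal sketch follows, assuming the classic four-coefficient form; the paper's illumination-corrected variant is not reproduced here, and all observation values are toy numbers.

```python
import numpy as np

def walthall_design(theta_s, theta_v, phi):
    """Design matrix of the modified Walthall model:
    rho = a*tv^2*ts^2 + b*(tv^2 + ts^2) + c*tv*ts*cos(phi) + d,
    with solar zenith ts, view zenith tv, relative azimuth phi (radians)."""
    tv2, ts2 = theta_v ** 2, theta_s ** 2
    return np.column_stack([
        tv2 * ts2,
        tv2 + ts2,
        theta_v * theta_s * np.cos(phi),
        np.ones_like(theta_v),
    ])

# Toy multi-angle reflectance observations for a single band.
theta_s = np.radians(np.full(6, 35.0))               # solar zenith
theta_v = np.radians([0., 10., 20., 30., 20., 10.])  # view zenith
phi = np.radians([0., 0., 90., 180., 180., 270.])    # relative azimuth
rho_obs = np.array([0.31, 0.32, 0.30, 0.27, 0.28, 0.30])

A = walthall_design(theta_s, theta_v, phi)
coeffs, *_ = np.linalg.lstsq(A, rho_obs, rcond=None)  # a, b, c, d
print("coefficients:", coeffs)
print("fitted reflectance:", A @ coeffs)
```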
Figures

Figure 1. Multi-level BRDF spectral reconstruction network. (a) Single-level spectral transformer module (SST); (b) multi-level spectral reconstruction network.
Figure 2. The structure of each component in the SST module. (a) Spectral multi-head attention module (S-MSA); (b) dual RsFFN; (c) spectral attention module (SAB).
Figure 3. UAV nested multi-rectangular flight routes.
Figure 4. Changes in aerosol and water vapor content on the days of the experiment (left column: first day; right column: second day).
Figure 5. Schematic diagram of observation angles at the moment of UAV imaging.
Figure 6. Processing flow of multi-angle data.
Figure 7. Comparison of true-color results of BRDF data reconstructed using different methods at different observation zenith angles.
Figure 8. Comparison of mean spectral curves and error curves reconstructed by different BRDF methods at different angles.
Figure 9. Error heat-map comparison of the reconstruction results of different methods in the 5th and 15th bands.
Figure 10. Comparison of the Walthall model data distribution with and without considering illumination variation.
Figure 11. Hyperspectral BRDF modeling with the illumination-corrected Walthall model.
Figure 12. Multi-angle spectral reconstruction BRDF modeling with the illumination-corrected Walthall model.
Figure 13. Error analysis comparison between the spectral reconstruction BRDF model and the hyperspectral BRDF model.
37 pages, 7441 KiB  
Review
Practical Guidelines for Performing UAV Mapping Flights with Snapshot Sensors
by Wouter H. Maes
Remote Sens. 2025, 17(4), 606; https://doi.org/10.3390/rs17040606 - 10 Feb 2025
Viewed by 924
Abstract
Uncrewed aerial vehicles (UAVs) have transformed remote sensing, offering unparalleled flexibility and spatial resolution across diverse applications. Many of these applications rely on mapping flights using snapshot imaging sensors to create 3D models of an area or to generate orthomosaics from RGB, multispectral, hyperspectral, or thermal cameras. Based on a literature review, this paper provides comprehensive guidelines and best practices for executing such mapping flights, addressing critical aspects of flight preparation and flight execution. Key flight-preparation considerations covered include sensor selection, flight height and GSD, flight speed, overlap settings, flight pattern, direction, and viewing angle; flight-execution considerations include on-site preparations (GCPs, camera settings, sensor calibration, and reference targets) as well as on-site conditions (weather conditions, time of the flights). In all these steps, high-resolution and high-quality data acquisition must be balanced against feasibility constraints such as flight time, data volume, and post-flight processing time. For reflectance and thermal measurements, BRDF issues also influence the correct settings. The formulated guidelines are based on literature consensus. However, the paper also identifies knowledge gaps in mapping flight settings, particularly in viewing angle pattern, flight direction, and thermal imaging in general. The guidelines aim to advance the harmonization of UAV mapping practices, promoting reproducibility and enhanced data quality across diverse applications.
(This article belongs to the Section Remote Sensing Image Processing)
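Two of the flight-preparation quantities discussed in this review, GSD and overlap-driven photo spacing, follow directly from pinhole camera geometry. A small sketch, with placeholder camera parameters rather than recommendations from the review:

```python
def gsd_cm(flight_height_m, focal_length_mm, pixel_pitch_um):
    """Ground sampling distance (cm/pixel) from pinhole geometry:
    GSD = pixel_pitch * flight_height / focal_length."""
    return pixel_pitch_um * 1e-6 * flight_height_m / (focal_length_mm * 1e-3) * 100.0

def spacing_m(footprint_m, overlap_frac):
    """Distance between consecutive exposures (or between flight lines)
    that yields the requested forward (or side) overlap."""
    return footprint_m * (1.0 - overlap_frac)

# Placeholder camera: 4000-px-wide sensor, 3.3 um pitch, 8 mm lens, 50 m AGL.
h, f, pitch, width_px = 50.0, 8.0, 3.3, 4000
gsd = gsd_cm(h, f, pitch)
footprint = gsd / 100.0 * width_px  # across-track footprint in m
print(f"GSD: {gsd:.2f} cm/px, footprint: {footprint:.1f} m")
print(f"line spacing at 80% sidelap: {spacing_m(footprint, 0.80):.1f} m")
```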
Figures

Graphical abstract
Figure 1. Overview of the UAV mapping process. This review focuses on the areas in bold and green.
Figure 2. Schematic overview of the solar and sensor viewing angles.
Figure 3. BRDF influence on spectral reflectance. (a) Images of a meadow obtained with a UAV from different sensor zenith and azimuth angles (Canon S110 camera on a Vulcan hexacopter with an AV200 gimbal (PhotoHigher, Wellington, New Zealand), obtained on 28 July 2015 near Richmond, NSW, Australia; lat: 33.611°S, lon: 150.732°E). (b) Empirical BRDF in the green wavelength over a tropical forest (Robson Creek, Queensland, Australia; lat: 17.118°S, lon: 145.630°E), obtained with the same UAV and camera on 16 August 2015, from [21]. (c-e) Simulations of reflectance in the red (c) and near-infrared (d) spectrum and for NDVI (e) (SCOPE; vegetation of 1 m height, LAI of 2, chlorophyll content of 40 μg/cm², and fixed solar zenith angle of 30°).
Figure 4. General workflow for flight planning, with an indication of the most important considerations in each step.
Figure 5. The effect of ground sampling distance (GSD) on image quality, in this case for weed detection in a corn field. Image taken on 14/07/2022 in Bottelare, Belgium (lat: 50.959°N, lon: 3.767°E) with a Sony α7R IV camera equipped with an 85 mm lens, flying at 18 m altitude on a DJI M600 Pro UAV. A small section of the orthomosaic, created in Agisoft Metashape, is shown. The original GSD was 0.85 mm, which was downscaled and exported at different GSDs using Agisoft Metashape.
Figure 6. (a) The 1 ha field for which the simulation was done. (b) The effect of GSD on the estimated flight time and the number of images required for mapping this area, calculated for a multispectral camera (MicaSense RedEdge-MX Dual). The simulation was performed in the DJI Pilot app, with horizontal and vertical overlap set at 80% and maximum flight speed set at 5 m s⁻¹.
Figure 7. Illustration of the terrain-following option (b) relative to the standard flight height option (a). The colors in (b) represent the actual altitude of the UAV above sea level, in m. Output (screenshot) of DJI Pilot 2, here for a Mavic 3E (RGB) camera with 70% overlap and a GSD of 2.7 cm.
Figure 8. (a) Schematic of a standard (parallel) mapping mission over a target area (orange line), with the planned locations for image capture (dots) illustrating the vertical and horizontal overlap. (b) The same area covered with a grid flight pattern (Section 3.5.1).
Figure 9. (a) The number of images collected for mapping a 1 ha area (100 m × 100 m field, see Figure 6) with a MicaSense RedEdge-MX multispectral camera as a function of the vertical and horizontal overlap; image number estimated in DJI Pilot. (b) Simulated number of cameras seeing each point for the same range of overlap and the same camera. (c) Simulated coordinates of the cameras (in m) seeing the center point (black +, relative coordinates (0,0)) for different overlaps (same horizontal and vertical overlap, see color scale) for the same camera, flown at 50 m flight height.
Figure 10. Adjusted overlap (the overlap to be given as input in the flight app) as a function of flight height and vegetation height, for a target overlap of 80%.
Figure 11. Orthomosaic ((a) full field; (b) detail) of a flight with 80% overlap in the horizontal and vertical directions. The yellow lines indicate the area taken from each single image. Note the constant pattern in the core of the images, whereas the edges typically take larger areas from a single image, increasing the risk of anisotropic effects. Image taken from Agisoft Metashape, from a dataset of multispectral imagery (MicaSense RedEdge-MX Dual) acquired on 07/10/2024 over a potato field in Bottelare, Belgium (lat: 50.9612°N, lon: 3.7677°E), at a flight height of 32 m.
Figure 12. Illustration of the different viewing-angle options available: (a) standard nadir option; (b) a limited number of oblique images from a single direction ("Elevation Optimization"); (c-f) oblique mapping under four different viewing angles. Output (screenshot) of the DJI Pilot 2 app, here for a Zenmuse P1 RGB camera (50 mm lens) on a DJI M350, with 65% horizontal and vertical overlap and a GSD of 0.22 cm.
Figure 13. Schematic overview of the corrections of thermal measurements: atmospheric correction (L_atm, τ) and the additional corrections for emissivity (ε) and longwave incoming radiation (L_in, W m⁻²) needed to retrieve surface temperature (T_s, K). (L_sensor = at-sensor radiance, W m⁻²; L_atm = upwelling at-sensor radiance, W m⁻²; τ = atmospheric transmittance (-); σ = Stefan-Boltzmann constant = 5.67 × 10⁻⁸ W m⁻² K⁻⁴.)
Figure 14. Overall summary of flight settings and flight conditions for the different applications. * More for larger or complex terrains.
14 pages, 7427 KiB  
Article
Spectral Bidirectional Reflectance Distribution Function Simplification
by Shubham Chitnis, Aditya Sole and Sharat Chandran
J. Imaging 2025, 11(1), 18; https://doi.org/10.3390/jimaging11010018 - 11 Jan 2025
Viewed by 515
Abstract
Non-diffuse materials (e.g., metallic inks, varnishes, and paints) are widely used in real-world applications. Accurate spectral rendering relies on the bidirectional reflectance distribution function (BRDF). Current methods of capturing BRDFs have proven onerous for achieving a quick turnaround from conception and design to production. We propose a multi-layer perceptron for compact spectral material representations, with 31 wavelengths, for four real-world packaging materials. Our neural-based scenario reduces measurement requirements while maintaining significant saliency. Unlike tristimulus BRDF acquisition, this spectral approach has not, to our knowledge, been previously explored with neural networks. We demonstrate compelling results for diffuse, glossy, and goniochromatic materials.
(This article belongs to the Special Issue Imaging Technologies for Understanding Material Appearance)
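The kind of compact spectral representation described in the abstract can be sketched as a small PyTorch MLP mapping a measurement geometry to a 31-band BRDF vector. Layer sizes, activations, and input parameterization below are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SpectralBRDFMLP(nn.Module):
    """Maps a measurement geometry to a 31-band spectral BRDF.
    Input: (theta_i, theta_r, delta_phi) in radians (3 features).
    Output: non-negative BRDF values at 31 wavelengths (400-700 nm)."""
    def __init__(self, hidden=64, n_bands=31):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bands), nn.Softplus(),  # keep BRDF >= 0
        )

    def forward(self, geometry):
        return self.net(geometry)

model = SpectralBRDFMLP()
geom = torch.tensor([[-0.7854, -0.4363, 0.0]])  # theta_i = -45 deg, theta_r = -25 deg
print(model(geom).shape)  # torch.Size([1, 31]) -- untrained, random output
```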
Figures

Figure 1. Packaging for fragrance bottles.
Figure 2. Goniochromatic materials have complex appearances. For example, butterfly wings, oil films, opal, and man-made textiles, such as saris seen in social contexts in India, exhibit this effect. For the same incident angle of illumination (θ_i = -45°), depending on the outgoing angle (θ_r = -60°, -25°, and 0°), the measured spectral BRDF of our packaging material exhibits vastly different values (note the trough in the -25° case compared to the -60° case). (Angles are in the plane of the surface normal and light directions; the negative values represent a convention in our measurement setup based on the reference azimuthal angle.)
Figure 3. We rendered the Pontiac GTO 67 scene [28] using the Mitsuba 3 [29] physically accurate rendering engine under different environment maps [30] to show subtle but important differences in appearance.
Figure 4. Our diverse packaging materials. Bidirectional reflectance (f) plots for an incoming angle θ_i = -45° and various outgoing angles θ_r.
Figure 5. Visualization of the four packaging materials from Section 2.2.
Figure 6. (a) The GCMS goniospectrophotometer prototype [https://www.mcrl.co.jp (accessed on 25 November 2024)]. (b) Spectral power distribution (SPD) of the point light source.
Figure 7. Prototype MLP.
Figure 8. Measured (bold line) and predicted (dotted line) spectral BRDF f_r (λ = 400, 450, 500, 550, 600, 650, and 700 nm) for the Gold packaging sample.
Figure 9. Box-and-whisker plots showing relative RMSE (RRMSE) for Gold (a) and Gonio (b) across wavelengths. Only test data (see Table 1) are used to calculate the error metric. The figure also shows (dotted blue) the true measured value of the BRDF; the units in blue are, of course, completely different from the units in magenta. As an example, for Gold the network could predict the BRDF at λ = 500 nm as 0.7084 instead of 0.7.
Figure 10. (a) Relative RMSE plots for Tungsten Carbide, similar to Figure 9. (b) Tungsten Carbide rendered using Mitsuba.
Figure 11. Mean squared error plots on the test set as MLP training progresses.
27 pages, 5909 KiB  
Article
A Phenologically Simplified Two-Stage Clumping Index Product Derived from the 8-Day Global MODIS-CI Product Suite
by Ge Gao, Ziti Jiao, Zhilong Li, Chenxia Wang, Jing Guo, Xiaoning Zhang, Anxin Ding, Zheyou Tan, Sizhe Chen, Fangwen Yang and Xin Dong
Remote Sens. 2025, 17(2), 233; https://doi.org/10.3390/rs17020233 - 10 Jan 2025
Viewed by 415
Abstract
The clumping index (CI) is a key structural parameter that quantifies the nonrandomness of the spatial distribution of vegetation canopy leaves. Investigating seasonal variations in the CI is crucial, especially for estimating the leaf area index (LAI) and studying global carbon and water cycles. However, accurate estimation of the seasonal CI faces substantial challenges, e.g., the need for accurate hot spot measurements, i.e., the typical feature of the bidirectional reflectance distribution function (BRDF) shape, in the current CI algorithm framework. Deriving a phenologically simplified, stable CI product from a high-frequency CI product (e.g., 8-day) therefore remains important for reducing the uncertainty of CI seasonality and simplifying CI applications. In this study, we applied the discrete Fourier transform and an improved dynamic threshold method to estimate the start of season (SOS) and end of season (EOS) from the CI time series, and we show that the CI exhibits significant seasonal variation that is generally consistent with the MODIS land surface phenology (LSP) product (MCD12Q2), although seasonal differences between them probably exist. Second, ignoring those differences, we divided the vegetation cycle into two phenological stages based on the MODIS LSP product, the leaf-on season (LOS, from greenup to dormancy) and the leaf-off season (LFS, after dormancy and before greenup of the next vegetation cycle), and developed the phenologically simplified two-stage CI product for the years 2001-2020 using the MODIS 8-day CI product suite. Finally, we assessed the accuracy of this CI product (RMSE = 0.06, bias = 0.01) against 95 datasets from 14 field-measured sites globally. This study revealed that the CI exhibits an approximately inverse trend in phenological variation compared with the NDVI. Globally, based on the phenologically simplified two-stage CI product, the CI_LOS is smaller than the CI_LFS across all land cover types. Compared with the LFS stage, the quality of this CI product is better in the LOS stage, where the QA is mostly 0 or 1, accounting for more than ~90% of the total quality flags, significantly higher than in the LFS stage (~60%). This study provides relatively reliable CI datasets that capture the general trend of seasonal CI variations and simplify potential applications in modeling ecological, meteorological, and other surface processes at both global and regional scales. It thus provides both new perspectives and datasets for future research on the CI and other biophysical parameters, e.g., the LAI.
(This article belongs to the Section Atmospheric Remote Sensing)
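The phenology-extraction step, a discrete Fourier transform for smoothing plus a dynamic amplitude threshold for SOS/EOS, can be sketched as follows. The harmonic count and threshold fraction are illustrative, not the paper's tuned values, and a real CI series would first be inverted, since the CI decreases during the growing season.

```python
import numpy as np

def fourier_smooth(series, n_harmonics=3):
    """Low-pass an annual time series by keeping the mean and the first
    few Fourier harmonics (discrete Fourier transform smoothing)."""
    spec = np.fft.rfft(series)
    spec[n_harmonics + 1:] = 0.0
    return np.fft.irfft(spec, n=len(series))

def sos_eos(series, threshold_frac=0.3):
    """Dynamic-threshold phenology: SOS/EOS are the first/last samples
    where the smoothed series exceeds min + frac * (max - min)."""
    smooth = fourier_smooth(series)
    thresh = smooth.min() + threshold_frac * (smooth.max() - smooth.min())
    above = np.where(smooth > thresh)[0]
    return above[0], above[-1]

# Synthetic 8-day greenness series (46 samples/year), one vegetation cycle.
doy = np.arange(46) * 8
series = 0.2 + 0.5 * np.exp(-0.5 * ((doy - 200) / 50.0) ** 2)
sos, eos = sos_eos(series)
print(f"SOS ~ day {doy[sos]}, EOS ~ day {doy[eos]}")
```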
Figures

Graphical abstract
Figure 1. Distribution of collected field CI measurements (red dots) and typical pixels (white triangles) for all IGBP classes at the global scale. These datasets are mainly distributed along the mid-latitudes, where vegetation seasonality tends to be easily identified.
Figure 2. Seasonal variation analysis of the CI and flow chart of the MODIS time-share two-stage CI product.
Figure 3. Diagram of the phenometrics retrieved for a single hypothetical vegetation cycle of the MODIS LSP product (MCD12Q2, V061).
Figure 4. Accuracy evaluation results for the estimation of typical-pixel vegetation phenology parameters from the CI time series. The accuracy metrics are the root mean square error (RMSE) and bias, with the minimum errors for each IGBP class highlighted by a bold square (the smaller the error, the smaller the square). Red indicates overestimated SOS and EOS values, whereas blue represents underestimated values. ENF: evergreen needleleaf forests; EBF: evergreen broadleaf forests; DNF: deciduous needleleaf forests; DBF: deciduous broadleaf forests; MF: mixed forests; Csh: closed shrublands; Osh: open shrublands; Wsa: woody savannas; Sav: savannas; GL: grasslands; PWe: permanent wetlands; CL1: annual croplands; CL2-1: first vegetation cycle of biannual cropland; CL2-2: second vegetation cycle of biannual cropland; CVM: cropland/natural vegetation mosaics. (a) RMSE of the estimated SOS from the CI. (b) RMSE of the estimated EOS from the CI. (c) Bias of the estimated SOS from the CI. (d) Bias of the estimated EOS from the CI.
Figure 5. Accuracy evaluation results for the estimation of typical-pixel vegetation phenology parameters from the NDVI time series. (a) RMSE of the estimated SOS from the NDVI. (b) RMSE of the estimated EOS from the NDVI. (c) Bias of the estimated SOS from the NDVI. (d) Bias of the estimated EOS from the NDVI.
Figure 6. Accuracy evaluation results comparing the time-share two-stage CIs with field-measured CIs; data overestimated or underestimated by more than 0.1 are marked with gray dots (outliers). The values in parentheses represent the accuracy evaluation results after removing the gray-dotted data.
Figure 7. Temporal variation in the time-share two-stage CIs at the (a) mixed forest, (b) woody savanna, (c) deciduous broadleaf forest, and (d) savanna field sites. Details of the field-measured CI data are given in Appendix A. Red pentagons indicate the field-measured CI data; black and blue dots indicate the CI_LOS and CI_LFS, respectively.
Figure 8. Global distribution of multiyear average time-share two-stage CIs for (a) LOS and (b) LFS in the first vegetation cycle from 2001 to 2020.
Figure 9. Distribution of the multiyear average CI_LOS and CI_LFS across different land cover types from 2001 to 2020.
Figure 10. Global distribution of the mode of QA for (a) LOS and (b) LFS, and histogram distribution of QA for the MODIS time-share two-stage CI product for (c) LOS and (d) LFS from 2001 to 2020.
21 pages, 10149 KiB  
Article
Minimizing Seam Lines in UAV Multispectral Image Mosaics Utilizing Irradiance, Vignette, and BRDF
by Hoyong Ahn, Chansol Kim, Seungchan Lim, Cheonggil Jin, Jinsu Kim and Chuluong Choi
Remote Sens. 2025, 17(1), 151; https://doi.org/10.3390/rs17010151 - 4 Jan 2025
Viewed by 470
Abstract
Unmanned aerial vehicle (UAV) imaging provides the ability to obtain high-resolution images at a lower cost than satellite imagery and aerial photography. However, multiple UAV images need to be mosaicked to cover large areas, and the resulting UAV multispectral image mosaics typically contain seam lines. To address this problem, we applied irradiance, vignette, and bidirectional reflectance distribution function (BRDF) filters and performed field work with a DJI Mavic 3 Multispectral (M3M) camera. We installed a calibrated reference tarp (CRT) in the center of the collection area and conducted three types of flights (BRDF, vignette, and validation) to measure the irradiance, radiance, and reflectance (essential inputs for irradiance correction) using a custom reflectance box (ROX). A vignette filter was generated from the vignette parameters, and the anisotropy factor (ANIF) was calculated by measuring the radiance at nadir, after which the BRDF model parameters were calculated. The calibration approaches were divided into a vignette-only process, which applied only vignette and irradiance corrections, and a full process, which included irradiance, vignette, and BRDF corrections. Accuracy was verified through a validation flight. The radiance uncertainty at the seam line ranged from 3.00 to 5.26% in the 80% lap mode when using nine images around the CRT, and from 4.06 to 6.93% in the 50% lap mode when using all images with the CRT. The term 'lap' in 'lap mode' refers to both overlap and sidelap. Images subjected to the vignette-only process had a radiance difference of 4.48-6.98%, while that of the full-process images was 1.44-2.40%, making the seam lines difficult to find with the naked eye and indicating that the process was successful.
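Both per-image corrections act as multiplicative filters: a radial polynomial for vignetting, and the anisotropy factor (ANIF), i.e., the ratio of off-nadir to nadir radiance, whose reciprocal flattens BRDF effects. A minimal sketch with placeholder polynomial coefficients; the paper derives its actual parameters from the dome and vignette flights.

```python
import numpy as np

def vignette_filter(shape, cx, cy, k):
    """Multiplicative vignette correction V(r) = 1 + k1*r^2 + k2*r^4 + k3*r^6,
    evaluated per pixel from the normalized distance r to the optical
    center (cx, cy); corrected DN = raw DN * V(r)."""
    y, x = np.indices(shape, dtype=float)
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) / max(cx, cy) ** 2
    return 1.0 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3

def anif(radiance_view, radiance_nadir):
    """Anisotropy factor: off-nadir radiance relative to nadir radiance.
    Dividing an observation by its ANIF flattens the BRDF effect."""
    return radiance_view / radiance_nadir

img = np.full((1080, 1440), 100.0)  # toy raw image (DN)
V = vignette_filter(img.shape, 720.0, 540.0, (0.08, 0.02, 0.005))
corrected = img * V                 # vignette-corrected image

# Toy BRDF flattening for one off-nadir pixel seen at ANIF = 1.12.
L_view, L_nadir = 112.0, 100.0
print(L_view / anif(L_view, L_nadir))  # ~100.0, emulating the nadir view
```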
Figures

Figure 1. Fieldwork equipment and flight paths for the bidirectional reflectance distribution function (BRDF) dome, vignette, and validation flights. M3M, Mavic 3 Multispectral; RTK, real-time kinematic; CRT, calibrated reference tarp; PKNU, Pukyong National University; ROX, reflectance box; PAR, photosynthetically active radiation. (a) DJI M3M with RTK. (b) CRT with PKNU ROX. (c) Pyranometer and PAR sensor. (d) Dome flight top view. (e) Dome flight side view. (f) Validation flight course.
Figure 2. Flowchart of this research. DN, digital number; DEM, digital elevation model; SAA, sun azimuth angle; SZA, sun zenith angle; CAA, camera azimuth angle; CZA, camera zenith angle.
Figure 3. PKNU ROX field measurements: IRRAD_image, irradiance, radiance, and reflectance results. (a) IRRAD_image (x: time; y: irradiance correction factor; unit: dimensionless). (b) Solar irradiance (x: time; y: irradiance; unit: μW/cm² nm). (c) CRT radiance (x: time; y: radiance; unit: μW/cm² nm sr). (d) CRT reflectance (x: time; y: reflectance; unit: %).
Figure 4. The vignette filter result (a) and comparison of the vignette factors from EXIF data (b) and from field observation according to the sidelap rate (%) change at the seam line (c). New, this research; original, from EXIF; x, band; y, vignette factor; unit: dimensionless in (b,c).
Figure 5. BRDF filter results for the considered bands, with SAA = 180° and SZA = 53.05° (25 April 2024, 15:57:34). (a) BRDF filter. (b) Green BRDF. (c) Red BRDF. (d) Red-edge BRDF. (e) NIR BRDF.
Figure 6. Images at each processing step and image of the BRDF vs. vignette differences (green band, Validation 1 flight). (a) Raw image. (b) Raw image + vignette filter. (c) Raw image + vignette filter + BRDF filter. (d) Difference between images (b) and (c).
Figure 7. The image radiance correction rate at the seam line (a) and the error between image radiance and field measurements obtained with the PKNU ROX (%) (b). Full: vignette, irradiance, and BRDF filters; vignette only: vignette and irradiance filters; original: no processing; 50% lap mode: radius 583 pixels, 6 paths, 58 photos; 80% lap mode: radius 324 pixels, 3 paths, 9 photos. x: band; y: error (%).
Figure 8. Validation flight orthomosaics obtained using different methods: Original (no processing), Vignette-Only and Irrad (vignette and irradiance processing), and Full (BRDF, vignette, and irradiance processing).
30 pages, 60239 KiB  
Article
Retrieval and Evaluation of Global Surface Albedo Based on AVHRR GAC Data of the Last 40 Years
by Shaopeng Li, Xiongxin Xiao, Christoph Neuhaus and Stefan Wunderle
Remote Sens. 2025, 17(1), 117; https://doi.org/10.3390/rs17010117 - 1 Jan 2025
Viewed by 669
Abstract
In this study, the global land surface albedo product GAC43 was retrieved for the years 1979 to 2020 using Advanced Very High Resolution Radiometer (AVHRR) global area coverage (GAC) data onboard National Oceanic and Atmospheric Administration (NOAA) and Meteorological Operational (MetOp) satellites. We provide a comprehensive description of the GAC43 retrieval process, followed by a comprehensive assessment against in situ measurements and three widely used satellite-based albedo products: the third edition of the CM SAF cLoud, Albedo and surface RAdiation (CLARA-A3) product, the Copernicus Climate Change Service (C3S) albedo product, and the MODIS BRDF/albedo product (MCD43). Our quantitative evaluations indicate that GAC43 demonstrates the best stability, with a linear trend of ±0.002 per decade at nearly all pseudo invariant calibration sites (PICS) from 1982 to 2020. In contrast, CLARA-A3 exhibits significant noise before the 2000s due to the limited availability of observations, while C3S shows substantial biases during the same period due to imperfect sensor intercalibrations. Extensive validation at globally distributed homogeneous sites shows that GAC43 has accuracy comparable to C3S, with an overall RMSE of approximately 0.03 but a smaller positive bias of 0.012. Comparatively, MCD43C3 shows the lowest RMSE (~0.023) and minimal bias, while CLARA-A3 displays the highest RMSE (~0.042) and bias (0.02). Furthermore, GAC43, CLARA-A3, and C3S exhibit overestimation in forests, with positive biases exceeding 0.023 and RMSEs of at least 0.028; in contrast, MCD43C3 shows negligible bias and a smaller RMSE of 0.015. For grasslands and shrublands, GAC43 and MCD43C3 demonstrate comparable estimation uncertainties of approximately 0.023, with close positive biases near 0.09, whereas C3S and CLARA-A3 exhibit higher RMSEs and biases exceeding 0.032 and 0.022, respectively. All four albedo products show significant RMSEs around 0.035 over croplands but achieve their best estimation accuracy, better than 0.020, over deserts. It is worth noting that significant biases are typically attributable to insufficient spatial representativeness of the measurement sites. Globally, GAC43 and C3S exhibit similar spatial distribution patterns across most land surface conditions, including an overestimation relative to MCD43C3 and an underestimation relative to CLARA-A3 in forested areas. In addition, GAC43, C3S, and CLARA-A3 estimate higher albedo values than MCD43C3 in low-vegetation regions such as croplands, grasslands, savannas, and woody savannas. Although the new GAC43 product shows the best stability over the last 40 years, the higher proportion of backup inversions before 2000 must be taken into account. Overall, GAC43 offers a promising long-term, consistent albedo record with good accuracy for future studies of global climate change, energy balance, and land management policy.
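The evaluation statistics used throughout the study (RMSE, bias, and the per-decade linear trend at PICS sites) are standard and easy to reproduce; a sketch with synthetic values:

```python
import numpy as np

def rmse(estimated, reference):
    return float(np.sqrt(np.mean((estimated - reference) ** 2)))

def bias(estimated, reference):
    return float(np.mean(estimated - reference))

def trend_per_decade(years, values):
    """Slope of an ordinary least-squares linear fit, scaled to a change
    per decade (as used for the PICS stability check)."""
    slope, _intercept = np.polyfit(years, values, 1)
    return 10.0 * slope

# Synthetic monthly-mean albedo at a desert PICS-like site, 1982-2020.
years = np.arange(1982, 2021)
rng = np.random.default_rng(0)
site = 0.32 + 0.0001 * (years - 1982) + rng.normal(0.0, 0.004, years.size)
reference = np.full(years.size, 0.32)

print(f"RMSE: {rmse(site, reference):.4f}, bias: {bias(site, reference):.4f}")
print(f"trend: {trend_per_decade(years, site):+.4f} per decade")
```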
Figures

Figure 1. Local solar times and solar zenith angles of equator observations for all AVHRR-carrying NOAA and MetOp satellites used to generate the GAC43 albedo products, shown in (a) and (b), respectively. SZA > 90° indicates night conditions.
Figure 2. Globally distributed sites with homogeneous characteristics and corresponding land cover types defined by the IGBP from the MCD12C1 product. Purple squares located in the desert are used to evaluate temporal stability, while the other sites are used for direct validation.
Figure 3. Flowchart of this study.
Figure 4. The performance of full inversions and of full plus backup inversions for various IGBP land cover types.
Figure 5. The performance of the GAC43 albedo with full inversions for various land cover types, where panels (a-h) represent BSV, CRO, DBF, EBF, ENF, GRA, OSH, and WSA, respectively. In the plots, the red solid line is the 1:1 line, and the green dotted and purple solid lines mark deviation limits of ±0.02 and ±0.04, respectively.
Figure 6. Google Earth™ images used to visually illustrate the heterogeneity surrounding selected homogeneous sites representing various land cover types: (a) EBF, (b) BSV, (c) CRO, and (d) GRA, as defined by the MCD12C1 IGBP classification. The red circle in each image denotes a radius of 2.5 km.
Figure 7. Inter-comparison of the four satellite-based albedo products. The top four subfigures (a-d) show the accuracy of all available matching samples between in situ measurements and satellite-derived albedo estimates, while the bottom four subfigures (e-h) show the performance using the same samples.
Figure 8. The performance of the four satellite-based albedo products using the same samples across various land surface types, evaluated in terms of (a) RMSE and (b) bias. The x-axis shows the land cover types (forest, grassland or shrublands, cropland, and desert) and the corresponding available samples.
Figure 9. The temporal performance of the four satellite-based albedo products relative to in situ measurements; each subplot represents one land cover case: (a) EBF, (b) ENF, (c) DBF, (d) GRA, and (e) CRO. The grey shaded areas depict situations with snow cover.
Figure 10. Spatial distribution of GAC43 BSA in July 2013 (a), with the corresponding differences from (b) CLARA-A3, (c) C3S, and (d) MCD43C3 in the same month.
Figure 11. Percentage difference in BSA values between (a) GAC43 and CLARA-A3, (b) GAC43 and C3S, and (c) GAC43 and MCD43C3 in July 2013.
Figure 12. Scatter plots between GAC43 BSA and (a) CLARA-A3 BSA, (b) C3S BSA, and (c) MCD43C3 BSA using all snow-free monthly pixels in July 2013; the red lines indicate 1:1.
Figure 13. Monthly BSA for the four satellite-based products across various land cover types in July 2013, where panels (a-i) represent CRO, DBF, DNF, EBF, ENF, GRA, MF, SAV, and WSA, respectively. In the plots, the bottom values for each albedo product are the medians of all corresponding land cover estimates; the top values are the available samples.
Figure 14. Monthly BSA from GAC43, MCD43C3, C3S, and CLARA-A3 at three randomly selected PICS sites: (a) Arabia 2 (20.19°N, 51.63°E); (b) Libya 3 (23.22°N, 23.23°E); and (c) Sudan 1 (22.11°N, 28.11°E), all characterized by BSV land surfaces as defined by the IGBP.
Figure 15. Box plots of the slope per decade for GAC43, CLARA-A3, C3S, and MCD43C3 at all PICS sites, where (a-d) represent the statistics for 1982-1990, 1991-2000, 2001-2010, and 2011-2020, respectively; the three dashed grey lines represent the 75%, 50%, and 25% quantiles. Red dotted lines indicate zero slope.
Figure 16. Percentage of full inversions for the years 2004, 2008, 2012, and 2016 based on GAC43 (top) and MCD43A3 (bottom).
Figure 17. Percentage of full inversions of GAC43 for the various continents from 1979 to 2020.
Figure A1. Spatial distribution of GAC43 BSA in July 2004 (a), with the corresponding differences from (b) CLARA-A3, (c) C3S, and (d) MCD43C3 in the same month.
Figure A2. As Figure A1, for July 2008.
Figure A3. As Figure A1, for July 2012.
Figure A4. As Figure A1, for July 2016.
Figure A5. Percentages of full inversions for the years between 1979 and 2020 based on the GAC43 data record.
17 pages, 7823 KiB  
Article
Goniopolarimetric Properties of Typical Satellite Material Surfaces: Intercomparison with Semi-Empirical pBRDF Modeled Results
by Min Yang, Hongxia Mao, Jun Wu, Chong Zheng and Li Wang
Photonics 2025, 12(1), 17; https://doi.org/10.3390/photonics12010017 - 27 Dec 2024
Viewed by 283
Abstract
Light reflected from satellite surfaces is polarized and plays a crucial role in space target identification and remote sensing. To deepen our understanding of the polarized reflectance properties of satellite material surfaces, we present polarimetric laboratory measurements of two typical satellite materials in the 400-1000 nm wavelength range using a goniometer instrument. The bidirectional polarized reflectance factor (BPRF) is used to describe the polarization characteristics of our samples. The polarized spectral reflectance and the distribution of the BPRF for our datasets are analyzed. Furthermore, five semi-empirical polarized bidirectional reflectance distribution function (pBRDF) models for the polarized reflectance of typical satellite material surfaces (the Priest-Germer, Maxwell-Beard, three-component, Cook-Torrance, and Kubelka-Munk models) are quantitatively intercompared using the measured BPRFs. The results suggest that the measured BPRFs of our samples are largely independent of wavelength and that the hemispherical distribution of the BPRF is clearly anisotropic. Except for the Priest-Germer model, the semi-empirical models agree well with the measured BPRF at the selected wavelengths, indicating that the polarized reflectance properties of satellite surfaces can be accurately simulated with existing polarimetric models. The Kubelka-Munk pBRDF model best fits the silver polyimide film and white coating surfaces, with RMSEs of 3.25% and 2.03% and correlation coefficients of 0.994 and 0.984, respectively. This study can provide an accurate pBRDF model for space object scene simulation and has great potential for polarization remote sensing.
(This article belongs to the Special Issue Polarization Optics)
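Of the five compared models, Cook-Torrance has the most widely known closed form. Below is a sketch of its scalar (unpolarized) specular term with a Beckmann distribution and Schlick's Fresnel approximation; the paper intercompares a polarized extension of this model, so the scalar form and all parameter values here are for illustration only.

```python
import numpy as np

def cook_torrance(theta_i, theta_r, roughness=0.3, f0=0.9):
    """Scalar Cook-Torrance specular BRDF in the incidence plane
    (relative azimuth 180 deg): f = D * G * F / (4 cos_i cos_r),
    with a Beckmann distribution D and Schlick's Fresnel term F."""
    n = np.array([0.0, 0.0, 1.0])
    wi = np.array([np.sin(theta_i), 0.0, np.cos(theta_i)])   # toward light
    wr = np.array([-np.sin(theta_r), 0.0, np.cos(theta_r)])  # toward viewer
    h = (wi + wr) / np.linalg.norm(wi + wr)                  # half vector

    cos_i, cos_r, cos_h = wi @ n, wr @ n, h @ n
    tan2 = (1.0 - cos_h ** 2) / cos_h ** 2
    D = np.exp(-tan2 / roughness ** 2) / (np.pi * roughness ** 2 * cos_h ** 4)
    G = min(1.0, 2.0 * cos_h * cos_i / (wi @ h), 2.0 * cos_h * cos_r / (wi @ h))
    F = f0 + (1.0 - f0) * (1.0 - wi @ h) ** 5
    return D * G * F / (4.0 * cos_i * cos_r)

# Strong peak near the specular direction, falling off away from it.
print(cook_torrance(np.radians(40.0), np.radians(40.0)))  # mirror direction
print(cook_torrance(np.radians(40.0), np.radians(10.0)))  # off-specular
```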
Figures

Figure 1. Pictures of the typical satellite materials: (a) silver polyimide film and (b) white coating.
Figure 2. (a) Schematic diagram of the measurement. (b) The grid shows the hemispherical sampling pattern for an incident zenith angle of 40°.
Figure 3. BRF spectral curves of the (a) silver polyimide film and (b) white coating at different viewing zenith angles (Δϕ = 180°) for an incident zenith angle of 40°.
Figure 4. DoLP curves of the (a) silver polyimide film and (b) white coating at different viewing zenith angles (Δϕ = 180°) for an incident zenith angle of 40°.
Figure 5. BPRF curves of the (a) silver polyimide film and (b) white coating at different viewing zenith angles (Δϕ = 180°) for an incident zenith angle of 40°.
Figure 6. Hemispherical distribution of the BPRF of the silver polyimide film at (a) 560 nm, (b) 670 nm, and (c) 865 nm, for an incident zenith angle of 40°.
Figure 7. Hemispherical distribution of the BPRF of the white coating at (a) 560 nm, (b) 670 nm, and (c) 865 nm, for an incident zenith angle of 40°.
Figure 8. Hemispherical distribution of the BPRF of the silver polyimide film at 670 nm, for incident zenith angles of (a) 30°, (b) 50°, and (c) 60°.
Figure 9. Hemispherical distribution of the BPRF of the white coating at 670 nm, for incident zenith angles of (a) 30°, (b) 50°, and (c) 60°.
Figure 10. Hemispherical distribution of (a) the measured BPRF and (b-f) the modeled BPRF of the silver polyimide film at 865 nm, for an incident zenith angle of 40°.
Figure 11. Hemispherical distribution of (a) the measured BPRF and (b-f) the modeled BPRF of the white coating at 865 nm, for an incident zenith angle of 40°.
Figure 12. Comparison between measured and modeled BPRF of our samples over all measured directions and all incident zenith angles.
27 pages, 10191 KiB  
Article
Hyperspectral Remote Sensing Estimation of Rice Canopy LAI and LCC by UAV Coupled RTM and Machine Learning
by Zhongyu Jin, Hongze Liu, Huini Cao, Shilong Li, Fenghua Yu and Tongyu Xu
Agriculture 2025, 15(1), 11; https://doi.org/10.3390/agriculture15010011 - 24 Dec 2024
Viewed by 621
Abstract
Leaf chlorophyll content (LCC) and leaf area index (LAI) are crucial for rice growth and development, serving as key parameters for assessing nutritional status, growth, water management, and yield prediction. This study introduces a novel canopy radiative transfer model (RTM) that couples the radiation transfer model for rice leaves (RPIOSL) with the unified BRDF model (UBM), comparing its simulated canopy hyperspectra with those from the PROSAIL model. Characteristic wavelengths were extracted using Sobol sensitivity analysis and competitive adaptive reweighted sampling. Using these wavelengths, rice phenotype estimation models were constructed with back propagation neural network (BPNN), extreme learning machine (ELM), and broad learning system (BLS) methods. The results indicate that the RPIOSL-UBM model's hyperspectra closely match measured data in the 500-650 nm and 750-1000 nm ranges, reducing the root mean square error (RMSE) by 0.0359 compared to the PROSAIL model. The ELM-based models using the RPIOSL-UBM dataset proved most effective for estimating the LAI and LCC, with RMSE values of 0.6357 and 6.0101 μg·cm⁻², respectively. These values show significant improvements over the PROSAIL-dataset models, with RMSE reductions of 0.1076 and 6.3297 μg·cm⁻², respectively. The findings demonstrate that the proposed model can effectively estimate rice phenotypic parameters from UAV-measured hyperspectral data, offering a new approach to assessing rice nutritional status and enhancing cultivation efficiency and yield. This study underscores the potential of advanced modeling techniques in precision agriculture.
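Of the three estimators compared, the ELM is the simplest to sketch: the hidden weights are random and fixed, and only the output weights are solved, by least squares. A minimal numpy version on synthetic data; nothing here reproduces the paper's inputs or tuned hyperparameters.

```python
import numpy as np

class ELM:
    """Extreme learning machine: one hidden layer with random, fixed
    weights; only the output weights are solved, by least squares."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Synthetic data: reflectance at 10 characteristic wavelengths -> LAI-like target.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 0.6, (200, 10))
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + rng.normal(0.0, 0.05, 200)

model = ELM(n_hidden=50).fit(X[:150], y[:150])
pred = model.predict(X[150:])
print("test RMSE:", np.sqrt(np.mean((pred - y[150:]) ** 2)))
```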
Figures

Figure 1: Overview of the study area. (a) Vector map of Liaoning Province, with Anshan City in the yellow area; (b) vector map of Anshan City, with Gengzhuang Town in the pink area; (c) instrumentation, including the UAV hyperspectral acquisition system, the LAI-2200C, and the visible-ultraviolet spectrophotometer; and (d) the experimental area, with the sampling areas labelled 1–11.
Figure 2: Schematic of the BPNN, where blue is the input layer, pink the hidden layer, and orange the output layer; x1, x2, …, xN are the input variables, y1, y2, …, yN the intermediate variables, Z1, Z2, …, ZN the output variables, and w_ih, w_hj the weights.
Figure 3: Schematic diagram of the ELM, where blue is the input layer, pink the hidden layer, and orange the output layer; 1…D are the input variables, 1…L the intermediate variables, 1…m the output variables, and ω_L, b_L the weights.
Figure 4: Schematic diagram of the BLS. (a) A neural network connected by traditional random vector functions, where blue is the input layer, pink the enhancement nodes, and orange the output layer; and (b) the BLS, including the input layer, feature nodes, enhancement nodes, and output layer.
Figure 5: Parameter optimization flowchart. Based on the parameters in the measured data and the measured spectra, the parameters (N1, N2, etc.) were optimized with the NSGA-III algorithm, using the error between the model output spectra and the measured spectra as the evaluation index.
Figure 6: Technical roadmap.
Figure 7: Spectral simulation and sensitivity analysis results.
Figure 8: Sensitivity analysis results, where (a) shows the RPIOSL-UBM model results and (b) the PROSAIL model results.
Figure 9: Feature wavelength screening results, where (a,b) are the LCC feature wavelengths screened with the RPIOSL-UBM model; (c,d) the LAI feature wavelengths screened with the RPIOSL-UBM model; (e,f) the LCC feature wavelengths screened with the PROSAIL model; (g,h) the LAI feature wavelengths screened with the PROSAIL model; (i) the LCC characteristic wavelength bands of the two RTMs; and (j) the LAI characteristic wavelength bands of the two RTMs. The pink lines in (b,d,f,h) correspond to the minimum RMSECV in (a,c,e,g), respectively.
Figure 10: BPNN-based estimation of LAI and LCC, where (a–f) estimate LAI with the RPIOSL-UBM model, with 1 to 6 hidden layers, respectively; (g–l) estimate LCC with the RPIOSL-UBM model, with the number of hidden layers from 10 to 60; (m–r) estimate LAI with the PROSAIL model, with 1 to 6 hidden layers; and (s–x) estimate LCC with the PROSAIL model, with 1 to 6 hidden layers.
Figure 11: ELM-based estimation of LAI and LCC, where (a–f) estimate LAI with the RPIOSL-UBM model, with hidden-layer depths from 10 to 60, respectively; (g–l) estimate LCC with the RPIOSL-UBM model, with hidden-layer depths from 10 to 60; (m–r) estimate LAI with the PROSAIL model, with hidden-layer depths of 1–5 and 10; and (s–x) estimate LCC with the PROSAIL model, with hidden-layer depths of 1–5 and 10.
Figure 12: BLS-based estimation of LAI and LCC, where (a–d) estimate LAI with the RPIOSL-UBM model, with regularization parameters from 2^-10 to 2^-40; (e–h) estimate LCC with the RPIOSL-UBM model, with regularization parameters from 2^-10 to 2^-40; (i–l) estimate LAI with the PROSAIL model, with regularization parameters from 2^-5 to 2^-30; and (m–p) estimate LCC with the PROSAIL model, with regularization parameters from 2^-10 to 2^-40.
Figure 13: Model runtime graph. (a) Training-set runtime heatmap; (b) validation-set runtime heatmap, where 1–6 denote parameter sets 1–6.
27 pages, 3310 KiB  
Article
Evaluation of Correction Algorithms for Sentinel-2 Images Implemented in Google Earth Engine for Use in Land Cover Classification in Northern Spain
by Iyán Teijido-Murias, Marcos Barrio-Anta and Carlos A. López-Sánchez
Forests 2024, 15(12), 2192; https://doi.org/10.3390/f15122192 - 12 Dec 2024
Viewed by 1065
Abstract
This study examined the effect of atmospheric, topographic, and Bidirectional Reflectance Distribution Function (BRDF) corrections of Sentinel-2 images implemented in Google Earth Engine (GEE) for use in land cover classification. The study was carried out in an area of complex orography in northern Spain and made use of the Spanish National Forest Inventory plots and other systematically located plots to cover non-forest classes. A total of 2991 photo-interpreted ground plots and 15 Sentinel-2 images, acquired in summer at a spatial resolution of 10–20 m per pixel, were used for this purpose. The overall goal was to determine the optimal level of image correction in GEE for subsequent use in time series analysis of images for accurate forest cover classification. Particular attention was given to the classification of cover by the major commercial forest species: Eucalyptus globulus, Eucalyptus nitens, Pinus pinaster, and Pinus radiata. The Second Simulation of the Satellite Signal in the Solar Spectrum (Py6S) algorithm, used for atmospheric correction, provided the best compromise between execution time and image size, in comparison with other algorithms such as Sentinel-2 Level 2A Processor (Sen2Cor) and Sensor Invariant Atmospheric Correction (SIAC). To correct the topographic effect, we tested the modified Sun-canopy-sensor topographic correction (SCS + C) algorithm with digital elevation models (DEMs) of three different spatial resolutions (90, 30, and 10 m per pixel). The combination of Py6S, the SCS + C algorithm and the high-spatial resolution DEM (10 m per pixel) yielded the greatest precision, which demonstrated the need to match the pixel size of the image and the spatial resolution of the DEM used for topographic correction. We used the Ross-Thick/Li-Sparse-Reciprocal BRDF to correct the variation in reflectivity captured by the sensor. The BRDF corrections did not significantly improve the accuracy of the land cover classification with the Sentinel-2 images acquired in summer; however, we retained this correction for subsequent time series analysis of the images, as we expected it to be of much greater importance in images with larger solar incidence angles. Our final proposed dataset, with image correction for atmospheric (Py6S), topographic (SCS + C), and BRDF (Ross-Thick/Li-Sparse-Reciprocal BRDF) effects and a DEM of spatial resolution 10 m per pixel, yielded better goodness-of-fit statistics than other datasets available in the GEE catalogue. The Sentinel-2 images currently available in GEE are therefore not the most accurate for constructing land cover classification maps in areas with complex orography, such as northern Spain. Full article
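The SCS + C topographic correction named above has a compact closed form: reflectance is rescaled by (cos α · cos θz + C) / (cos i + C), where i is the local illumination angle on the tilted surface and C is the intercept-to-slope ratio of a band-wise regression of reflectance against cos i. Below is a minimal single-band NumPy sketch of that formulation, not the GEE implementation evaluated in the paper; slope and aspect are assumed precomputed from the DEM, and all numbers are placeholders:

    import numpy as np

    def scs_c(rho, slope, aspect, sun_zenith, sun_azimuth):
        # Local illumination angle on the tilted surface (all angles in radians).
        cos_i = (np.cos(sun_zenith) * np.cos(slope)
                 + np.sin(sun_zenith) * np.sin(slope)
                 * np.cos(sun_azimuth - aspect))
        # C parameter: intercept/slope of the per-band rho-vs-cos(i) regression.
        m, c = np.polyfit(cos_i, rho, 1)
        C = c / m
        # Sun-canopy-sensor geometry with the empirical C term.
        return rho * (np.cos(slope) * np.cos(sun_zenith) + C) / (cos_i + C)

    # Hypothetical flattened band and terrain arrays for one granule
    n = 10_000
    rho = np.random.uniform(0.02, 0.4, n)
    slope = np.random.uniform(0, np.radians(35), n)
    aspect = np.random.uniform(0, 2 * np.pi, n)
    rho_corr = scs_c(rho, slope, aspect, np.radians(35.0), np.radians(150.0))

The C term matters in complex orography: without it, weakly illuminated slopes (cos i near zero) are overcorrected, which is one reason the DEM resolution choice tested here has such a visible effect.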
Figures

Figure 1: Workflow adopted in this study to analyze different combinations of Sentinel-2 imagery corrections. In Algorithm_AT00B, Algorithm_ is the name or abbreviation of the algorithm used, A denotes "atmospheric correction", T "topographic correction", the number 00 refers to the spatial resolution of the digital elevation model (DEM) (90, 30, or 10 m per pixel), and B refers to "application of BRDF correction". The datasets are shown in three colours: datasets available in the GEE repository in blue; the dataset developed in the Sentinel Application Platform (SNAP 11.0.0) and uploaded to GEE assets in purple; and the Level 1C datasets derived from the GEE platform in orange. In all cases, the Random Forest algorithm was used for fitting each processing dataset.
Figure 2: Overview of (a) the location of the study area overlapping the Spanish National Forest Inventory plots used in this study, (b) Sentinel-2 granules for the study area, and (c) the location of the region of interest in northern Spain. WGS 84/UTM zone 29N (EPSG: 32629).
Figure 3: Visual comparison of the four datasets.
Figure 4: Box plots of the overall accuracy of the whole land cover classification for different levels of S2 image processing: no atmospheric, topographic, or BRDF correction (1C); atmospheric and topographic correction with the Sen2Cor algorithm and a DEM of 90 m per pixel (S2C_AT90); and atmospheric correction with the Py6S algorithm, topographic correction with the SCS + C algorithm and a DEM of 10 m per pixel, plus BRDF correction (Py6S_AT10B). The letters at the top of the boxes indicate the results of Tukey's HSD multiple comparison test (different letters indicate significant differences between the levels of dataset processing and/or correction algorithms used).
20 pages, 16875 KiB  
Article
Pest Detection in Citrus Orchards Using Sentinel-2: A Case Study on Mealybug (Delottococcus aberiae) in Eastern Spain
by Fàtima Della Bellver, Belen Franch Gras, Italo Moletto-Lobos, César José Guerrero Benavent, Alberto San Bautista Primo, Constanza Rubio, Eric Vermote and Sebastien Saunier
Remote Sens. 2024, 16(23), 4362; https://doi.org/10.3390/rs16234362 - 22 Nov 2024
Viewed by 764
Abstract
The Delottococcus aberiae is a mealybug pest known as Cotonet de les Valls in the province of Castellón (Spain). This tiny insect is causing large economic losses in the Spanish agricultural sector, especially in the citrus industry. The European Copernicus program encourages the progress of Earth observation (EO) in relation to the development of agricultural monitoring tools. In this context, this work analyzes the temporal evolution of spectral surface reflectance data from Sen2Like for healthy fields and for fields affected by the mealybug. The study area is focused on the surroundings of Vall d'Uixó (Castellón, Spain), involving an approximate area of 25 ha distributed across 21 citrus fields with different mealybug incidence, classified as healthy or unhealthy, during the 2020–2021 season. The relationship between the mealybug infestation level and the Normalized Difference Vegetation Index (NDVI) and other optical bands (Red, NIR, SWIR, derived from Sen2Like) was analyzed by studying the time-series evolution of each parameter across the period 2017–2022. In this study, we also demonstrate that evergreen fruit trees such as citrus show seasonality across EO-based time series that is linked to directional effects caused by the sensor–sun geometry. This can be mitigated by using a Bidirectional Reflectance Distribution Function (BRDF) model such as the High-Resolution Adjusted BRDF Algorithm (HABA). To study the infested fields separately from healthy ones and avoid mixing fields with very different spectral responses caused by field type, separation between rows, or age, we studied the evolution of each parcel separately using monthly linear regressions, taking the 2017–2018 seasons, before the pest had developed, as the reference. The observations indicate that affected and healthy plots can be distinguished within a year using specific spectral ranges, with SWIR proving a notably effective channel, enabling separability from mid-summer to the fall. Furthermore, the anomaly inspection shows an increase in the effects of the pest from 2020 to 2022 in all spectral regions and enables a first approximation for identifying healthy and affected fields based on negative anomalies in the red and SWIR channels and positive anomalies in the NIR and NDVI. This work contributes to the development of new monitoring tools for efficient and sustainable pest control. Full article
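The per-parcel analysis rests on two simple building blocks: NDVI computed from NBAR reflectances, and anomalies taken as departures from a monthly reference built on the pre-pest 2017–2018 seasons. A hedged pandas sketch of that bookkeeping follows; the column names, dates, and values are placeholders, not the study's data:

    import numpy as np
    import pandas as pd

    def ndvi(nir, red):
        # Normalized Difference Vegetation Index
        return (nir - red) / (nir + red)

    # Hypothetical monthly NBAR time series for one parcel, 2017-2022
    ts = pd.DataFrame({
        "date": pd.date_range("2017-01-01", periods=72, freq="MS"),
        "red": np.random.uniform(0.03, 0.08, 72),
        "nir": np.random.uniform(0.30, 0.45, 72),
    })
    ts["ndvi"] = ndvi(ts["nir"], ts["red"])

    # Monthly reference climatology from the pre-pest 2017-2018 seasons
    pre = ts[ts["date"] < "2019-01-01"]
    ref = pre.groupby(pre["date"].dt.month)["ndvi"].mean()
    ts["anomaly"] = ts["ndvi"] - ts["date"].dt.month.map(ref)

Referencing each month to its own pre-pest climatology is what lets the anomaly separate pest damage from the ordinary seasonal cycle of an evergreen canopy.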
Figures

Figure 1: Image of the Delottococcus aberiae pest. The mealybug (white insects) can be seen; it corresponds to an adult female.
Figure 2: Study area with 24.6 hectares distributed across 21 parcels: 12 affected by the mealybug pest (red) and 9 healthy (blue).
Figure 3: Temporal evolution of SWIR reflectance means (with standard deviation) by plot. Red signals correspond to affected parcels and blue signals to healthy ones.
Figure 4: Overview of the methodology workflow.
Figure 5: NBAR (Nadir BRDF-Adjusted Reflectance) and SDR ρ (directional surface reflectance) comparison for the red, NIR, and SWIR channels (bands 4, 8A, 11).
Figure 6: Comparison of NBAR and SDR ρ NIR evolution and differences in healthy parcels (subfigures a and c, respectively) and affected parcels (subfigures b and d, respectively).
Figure 7: Temporal evolution of NBAR means for healthy (blue) and affected (red) parcels. The dots are the means of each channel and the shaded areas are the means of the standard deviations.
Figure 8: Temporal evolution of the differences (dots) between NBAR means for healthy and affected parcels, with a linear regression (dashed line) in each case.
Figure 9: Slope evolution for parcel-by-parcel monthly trends in a box-plot representation. Blue boxes refer to healthy parcels and red boxes to affected ones.
Figure 10: Comparison of the evolution of anomalies in healthy (a) and affected (b) parcels for the red channel.
Figure 11: Comparison of the evolution of anomalies in healthy (a) and affected (b) parcels for the NIR channel.
Figure 12: Comparison of the evolution of anomalies in healthy and affected parcels for the SWIR channel.
Figure 13: Comparison of the evolution of anomalies in healthy and affected parcels for NDVI.
Figure 14: Anomalies per plot for the SWIR channel in October 2022. Blue borders indicate healthy parcels and red borders affected ones.
21 pages, 10992 KiB  
Article
Radiometric Cross-Calibration of HJ-2A/CCD3 Using the Random Forest Algorithm and a Spectral Interpolation Convolution Method with Sentinel-2/MSI
by Xiang Zhou, Yidan Chen, Yong Xie, Jie Han and Wen Shao
Remote Sens. 2024, 16(22), 4337; https://doi.org/10.3390/rs16224337 - 20 Nov 2024
Viewed by 721
Abstract
In the process of radiometric calibration, corrections for bidirectional reflectance distribution functions (BRDFs) and spectral band adjustment factors (SBAFs) are crucial. Time-series MODIS images are commonly used to construct BRDFs with the Ross–Li model in current research. However, the Ross–Li BRDF model is based on a linear combination of kernel models and cannot capture the nonlinear relationships between them. Furthermore, when using SBAFs to account for spectral differences, a radiative transfer model is often used, but it requires many parameters to be set, which may introduce more errors and reduce the calibration accuracy. To address these issues, a random forest algorithm and a spectral interpolation convolution method using the Sentinel-2 multispectral instrument (MSI) are proposed in this study, with the HuanJing-2A (HJ-2A) charge-coupled device (CCD3) sensor as an example and the Dunhuang radiometric calibration site (DRCS) as the radiometric transfer platform. Firstly, a BRDF model of the DRCS is constructed from time-series MODIS images using the random forest algorithm, which corrects the viewing-geometry difference. Secondly, the BRDF correction coefficients, MSI reflectance, and relative spectral responses (RSRs) of CCD3 are used to correct the spectral differences. Finally, in the validation, the maximum relative error between the calibration results of the proposed method and the official calibration coefficients (OCCs) published by the China Centre for Resources Satellite Data and Application (CRESDA) is 3.38%. When tested at the Baotou sandy site, the proposed method yields smaller average relative errors than the OCCs for all bands except the near-infrared (NIR) band, which shows a larger error. Additionally, the effects on the calibration results of the light-matching method versus the radiative transfer method, different approaches to constructing the BRDF model, the use of SBAFs to account for spectral differences, different BRDF sources, imprecise viewing-geometry parameters, the spectral interpolation method, and geometric positioning error are analyzed. The results indicate that the cross-calibration coefficients obtained using the random forest algorithm and the proposed spectral interpolation method are more applicable to the CCD3, as they account for the nonlinear relationships between the kernel models and reduce the error due to the radiative transfer model. The total uncertainty of the proposed method in all bands is less than 5.16%. Full article
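The random forest BRDF model replaces the linear Ross–Li kernel combination with a nonparametric regression from acquisition geometry to site reflectance. Here is a scikit-learn sketch under stated assumptions; the feature set (solar zenith, view zenith, relative azimuth) and every number below are illustrative stand-ins, not the paper's training data:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical training table from time-series MODIS looks at the site:
    # features = [solar zenith, view zenith, relative azimuth] in degrees.
    rng = np.random.default_rng(0)
    geometry = rng.uniform([20, 0, 0], [60, 65, 180], size=(500, 3))
    reflectance = rng.uniform(0.20, 0.35, size=500)   # stand-in site reflectance

    brdf = RandomForestRegressor(n_estimators=200, random_state=0)
    brdf.fit(geometry, reflectance)

    # BRDF correction factor between the two sensors' acquisition geometries
    rho_msi = brdf.predict([[35.0, 0.0, 90.0]])   # reference geometry (illustrative)
    rho_ccd = brdf.predict([[35.0, 18.0, 60.0]])  # target geometry (illustrative)
    k_brdf = rho_ccd / rho_msi

Because the forest is a piecewise-constant regressor, it can represent interactions between zenith and azimuth that a linear kernel sum cannot, which is the nonlinearity argument made in the abstract.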
Figures

Figure 1: RSRs of CCD3 and MSI.
Figure 2: DRCS from a CCD3 image acquired on 9 May 2022.
Figure 3: Baotou sandy site from an MSI image acquired on 2 March 2022.
Figure 4: Geometry information of the time-series MODIS images in 2022.
Figure 5: Flowchart of the experiment.
Figure 6: Fitting of cross-calibration coefficients for each band of the CCD3.
Figure 7: Average relative error of the ICRs with reflectance calculated based on OCCs and FCCs.
Figure 8: Average relative error of the ICRs with reflectance calculated by OCCs and other cross-calibration coefficients after fitting.
Figure 9: Continuous spectral curves after cubic polynomial interpolation for MOD09GA on the calibration days.
Figure 10: Reflectance information of the time-series MSI images in 2022.
24 pages, 7512 KiB  
Article
Color Reproduction of Chinese Painting Under Multi-Angle Light Source Based on BRDF
by Xinting Li, Jie Feng and Jie Liu
Photonics 2024, 11(11), 1089; https://doi.org/10.3390/photonics11111089 - 20 Nov 2024
Viewed by 625
Abstract
It is difficult to achieve high-precision color reproduction with traditional color reproduction methods when the illumination angle changes, and, for large-sized artifacts, collecting the required volume of data and reproducing the colors is also significantly difficult. In this paper, we use three Bidirectional Reflectance Distribution Function (BRDF) modeling methods based on spectral imaging techniques, namely, the five-parameter model, the Cook–Torrance model, and the segmented linear interpolation model. We investigated the color reproduction of color chips with matte surfaces and Chinese paintings with rough surfaces under unknown illumination angles. Experiments have shown that all three models can effectively perform image reconstruction at small illumination angle intervals. The segmented linear interpolation model exhibits higher stability and accuracy in color reconstruction at both small and large illumination angle intervals; it can not only reconstruct color chips and Chinese painting images under any illumination angle, but also meet high-quality image color reconstruction standards in terms of objective data and intuitive perception. The best-performing model (segmented linear interpolation) reconstructs Chinese paintings at 65° and 125° with an illumination angle interval of 10°: the average RMSE of the selected reference color blocks is 0.0450 and 0.0589, the average CIEDE2000 color difference is 1.07 and 1.50, and the SSIM values are 0.9227 and 0.9736, respectively. This research can provide a theoretical basis and methodological support for accurate color reproduction and for the scientific prediction of large-sized artifacts at any angle, and it has potential applications in cultural relic protection, art reproduction, and other fields. Full article
(This article belongs to the Special Issue Optical Imaging and Measurements: 2nd Edition)
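Of the three models, the segmented linear interpolation is the simplest to state: for each wavelength, the BRDF at an unmeasured illumination angle is interpolated linearly between the two nearest measured angles. A minimal NumPy sketch follows; the angles, band count, and data are placeholders for the measured chips, not the paper's measurements:

    import numpy as np

    # Hypothetical BRDF samples: rows = measured illumination angles (deg),
    # columns = wavelengths (e.g. 400-700 nm at 10 nm steps).
    angles = np.array([55.0, 60.0, 70.0, 75.0, 120.0, 130.0])
    brdf = np.random.rand(angles.size, 31)

    def brdf_at(angle, angles, brdf):
        # Piecewise (segmented) linear interpolation, wavelength by wavelength.
        return np.array([np.interp(angle, angles, brdf[:, k])
                         for k in range(brdf.shape[1])])

    rec = brdf_at(65.0, angles, brdf)             # reconstructed spectrum at 65 deg
    measured = np.random.rand(31)                 # stand-in for a measurement
    rmse = np.sqrt(np.mean((rec - measured) ** 2))

Unlike the parametric five-parameter and Cook–Torrance fits, this makes no assumption about lobe shape, which is consistent with its reported stability across both small and large angle intervals.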
Figures

Figure 1: Irradiance of the LED light source.
Figure 2: Data acquisition schematic.
Figure 3: Experimental flow chart.
Figure 4: Angle relationship schematic.
Figure 5: Reflectance and BRDF curves: (a) No. 1; (b) No. 26; and (c) No. 31.
Figure 6: Variation of the BRDF value with wavelength and light-source illumination angle at a 90° viewing angle: (a) red color chip; (b) green color chip; and (c) blue color chip.
Figure 7: Modeling results: (a) No. 1 at 68°; (b) No. 1 at 128°; (c) No. 26 at 68°; (d) No. 26 at 128°; (e) No. 31 at 68°; and (f) No. 31 at 128°.
Figure 8: RMSE of model reconstruction at 5° illumination angle intervals: (a) 68°; and (b) 128°.
Figure 9: RGB spatial modeling results: (a) the original direct measurement at 68°; (b) the five-parameter model reconstruction at 68°; (c) the Cook–Torrance model reconstruction at 68°; (d) the segmented linear interpolation reconstruction at 68°; (e) the original direct measurement at 128°; (f) the five-parameter model reconstruction at 128°; (g) the Cook–Torrance model reconstruction at 128°; and (h) the segmented linear interpolation reconstruction at 128°.
Figure 10: CIEDE2000 color difference of model reconstruction at 5° illumination angle intervals: (a) 68°; and (b) 128°.
Figure 11: Modeling results: (a) No. 1 at 68°; (b) No. 1 at 128°; (c) No. 26 at 68°; (d) No. 26 at 128°; (e) No. 31 at 68°; and (f) No. 31 at 128°.
Figure 12: RMSE of model reconstruction at 5° illumination angle intervals: (a) 68°; and (b) 128°.
Figure 13: RGB spatial modeling results, with panels (a–h) as in Figure 9.
Figure 14: CIEDE2000 color difference of model reconstruction at 10° illumination angle intervals: (a) 68°; and (b) 128°.
Figure 15: Chinese painting with marked color block position information.
Figure 16: Modeling results from a 90° viewing angle: (a) No. 4 at 65°; (b) No. 4 at 125°; (c) No. 8 at 65°; (d) No. 8 at 125°; (e) No. 19 at 65°; and (f) No. 19 at 125°.
Figure 17: RMSE for spectral reconstruction of Chinese paintings: (a) 65°; and (b) 125°.
Figure 18: Chinese painting modeling results: (a) the original image directly measured at 65°; (b) the segmented linear interpolation reconstruction at 65°; (c) the original image directly measured at 125°; and (d) the segmented linear interpolation reconstruction at 125°.
Figure 19: CIEDE2000 color difference for color reconstruction of Chinese paintings: (a) 65°; and (b) 125°.
21 pages, 8760 KiB  
Article
Research on the Laser Scattering Characteristics of Three-Dimensional Imaging Based on Electro–Optical Crystal Modulation
by Houpeng Sun, Yingchun Li, Huichao Guo, Chenglong Luan, Laixian Zhang, Haijing Zheng and Youchen Fan
Micromachines 2024, 15(11), 1327; https://doi.org/10.3390/mi15111327 - 30 Oct 2024
Viewed by 832
Abstract
In this paper, we construct a laser 3D imaging simulation model based on the 3D imaging principle of electro–optical crystal modulation. Unlike traditional 3D imaging simulation methods, this paper focuses on the laser scattering characteristics of the target scene. To accurately analyze and simulate the scattering characteristics of the target under laser irradiation, we propose a BRDF (Bidirectional Reflectance Distribution Function) model-fitting algorithm based on a hybrid BBO–Firefly model, which can accurately simulate the laser scattering distribution of the target at different angles. Finally, from the fitted scattering characteristic model, we retrieved the target's grayscale image and used the laser 3D imaging restoration principle to reconstruct a 3D point cloud of the target, realizing laser 3D imaging of the target. Full article
(This article belongs to the Special Issue Optical and Laser Material Processing)
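The BBO–Firefly hybrid is a population metaheuristic; the firefly half alone already conveys the mechanics: dimmer (higher-loss) candidates are attracted toward brighter ones with a distance-decaying step plus a random perturbation. The sketch below fits a toy two-parameter specular lobe and deliberately omits the BBO migration stage; the loss, bounds, and hyperparameters are illustrative assumptions, not the paper's configuration:

    import numpy as np

    def firefly_minimize(loss, bounds, n=25, iters=100,
                         beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
        # Plain firefly algorithm (the paper's BBO hybridization is omitted).
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        pop = rng.uniform(lo, hi, size=(n, lo.size))
        cost = np.array([loss(p) for p in pop])
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if cost[j] < cost[i]:   # j is "brighter": attract i toward it
                        beta = beta0 * np.exp(-gamma * np.sum((pop[i] - pop[j]) ** 2))
                        pop[i] += (beta * (pop[j] - pop[i])
                                   + alpha * rng.uniform(-0.5, 0.5, lo.size))
                        pop[i] = np.clip(pop[i], lo, hi)
                        cost[i] = loss(pop[i])
        best = np.argmin(cost)
        return pop[best], cost[best]

    # Toy target: fit amplitude and width of a Gaussian-like specular lobe.
    theta = np.linspace(0.0, np.pi / 3, 40)
    target = 0.8 * np.exp(-4.0 * theta ** 2)
    sse = lambda p: np.sum((p[0] * np.exp(-p[1] * theta ** 2) - target) ** 2)
    params, err = firefly_minimize(sse, bounds=[(0, 2), (0, 10)])

In a BRDF-fitting context, the loss would be the squared error between measured scattering data and the parametric BRDF model, and the bounds would constrain the model's physical parameters.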
Figures

Figure 1: Principle diagram of 3D imaging based on EO crystal modulation.
Figure 2: Light intensity distribution of the EO crystal modulation [18].
Figure 3: Schematic diagram of EO crystal-modulated 3D imaging range information recovery.
Figure 4: Distance grayscale curve of a trapezoid.
Figure 5: Schematic diagram of the reflection on the surface of an object.
Figure 6: Geometric relationship of the BRDF.
Figure 7: Five-parameter BRDF.
Figure 8: BRDF measurement system.
Figure 9: (a) Lambertian plate. (b) Experimental measurements and theoretical values.
Figure 10: Species model of a habitat.
Figure 11: Flowchart of the hybrid BBO–Firefly algorithm.
Figure 12: BRDF measurements and fitting curves for aluminum plates and gold foils.
Figure 13: BRDF parameter optimization convergence curve.
Figure 14: BRDF simulation measurement.
Figure 15: Architecture diagram of the laser 3D imaging simulation model.
Figure 16: CALIPSO satellite model.
Figure 17: Laser 3D imaging target model.
Figure 18: Results of laser 3D imaging (conventional methods).
Figure 19: Results of laser 3D imaging (our methods).
Figure 20: EO crystal modulation 2D imaging map.
Figure 21: Three-dimensional point cloud recovery map of the imaging target.
13 pages, 5013 KiB  
Article
Influence of Target Surface BRDF on Non-Line-of-Sight Imaging
by Yufeng Yang, Kailei Yang and Ao Zhang
J. Imaging 2024, 10(11), 273; https://doi.org/10.3390/jimaging10110273 - 29 Oct 2024
Cited by 1 | Viewed by 917
Abstract
The surface material of an object is a key factor that affects non-line-of-sight (NLOS) imaging. In this paper, we introduce the bidirectional reflectance distribution function (BRDF) into NLOS imaging to study how the target surface material influences the quality of NLOS images. First, the BRDF of two surface materials (aluminized insulation material and white paint board) was modeled using deep neural networks and compared with a five-parameter empirical model to validate the method’s accuracy. The method was then applied to fit BRDF data for different common materials. Finally, NLOS target simulations with varying surface materials were reconstructed using the confocal diffusion tomography algorithm. The reconstructed NLOS images were classified via a convolutional neural network to assess how different surface materials impacted imaging quality. The results show that image clarity improves when decreasing the specular reflection and increasing the diffuse reflection, with the best results obtained for surfaces exhibiting a high diffuse reflection and no specular reflection. Full article
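The deep-network BRDF fit described here amounts to a smooth regression from measurement geometry (and, optionally, wavelength) to BRDF value. A hedged scikit-learn sketch follows; the network width, the choice of inputs, and the synthetic lobe below are assumptions for illustration, not the paper's architecture or data:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical samples: incident zenith, scattering zenith (deg), wavelength (nm).
    X = np.column_stack([
        rng.uniform(0, 70, n),
        rng.uniform(-80, 80, n),
        rng.uniform(450, 900, n),
    ])
    # Synthetic stand-in BRDF: a specular lobe around the mirror direction
    # plus a small diffuse floor.
    y = np.exp(-((X[:, 1] - X[:, 0]) / 25.0) ** 2) + 0.05

    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    net.fit(X, y)
    brdf_est = net.predict([[45.0, 50.0, 650.0]])

A network fit of this kind interpolates smoothly between sparse goniometric measurements, which is what allows the reconstruction study to vary the specular-to-diffuse balance of the simulated target surfaces.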
Figures

Figure 1: Geometric diagram of BRDF.
Figure 2: Model of CDT imaging.
Figure 3: Modeling of BRDF based on a deep neural network.
Figure 4: Schematic diagram of the convolution operation.
Figure 5: Comparison of BRDF data for the aluminized insulation material under the two models: (a) θi = 30°; (b) θi = 45°; (c) θi = 60°.
Figure 6: Comparison of BRDF data for the white paint board under the two models: (a) θi = 30°; (b) θi = 45°; (c) θi = 60°.
Figure 7: BRDF fitting results for different target objects. (a) BRDF fitting results for seven target objects; (b) BRDF for three instances.
Figure 8: Reconstruction results of different objects.
Figure 9: Classification results of different BRDF surface targets.