Search Results (39)

Search Parameters:
Keywords = Chinese address match

25 pages, 9167 KiB  
Review
Modeling LiDAR-Derived 3D Structural Metric Estimates of Individual Tree Aboveground Biomass in Urban Forests: A Systematic Review of Empirical Studies
by Ruonan Li, Lei Wang, Yalin Zhai, Zishan Huang, Jia Jia, Hanyu Wang, Mengsi Ding, Jiyuan Fang, Yunlong Yao, Zhiwei Ye, Siqi Hao and Yuwen Fan
Forests 2025, 16(3), 390; https://doi.org/10.3390/f16030390 - 22 Feb 2025
Viewed by 336
Abstract
The aboveground biomass (AGB) of individual trees is a critical indicator for assessing urban forest productivity and carbon storage. In the context of global warming, it plays a pivotal role in understanding urban forest carbon sequestration and regulating the global carbon cycle. Recent advances in light detection and ranging (LiDAR) have enabled the detailed characterization of three-dimensional (3D) structures, significantly enhancing the accuracy of individual tree AGB estimation. This review examines studies that use LiDAR-derived 3D structural metrics to model and estimate individual tree AGB, identifying key metrics that influence estimation accuracy. A bibliometric analysis of 795 relevant articles from the Web of Science Core Collection was conducted using R Studio (version 4.4.1) and VOSviewer 1.6.20 software, followed by an in-depth review of 80 papers focused on urban forests, published after 2010 and selected from the first and second quartiles of the Chinese Academy of Sciences journal ranking. The results show the following: (1) Dalponte2016 and watershed are more widely used among 2D raster-based algorithms, and 3D point cloud-based segmentation algorithms offer greater potential for innovation; (2) tree height and crown volume are important 3D structural metrics for individual tree AGB estimation, and biomass indices that integrate these parameters can further improve accuracy and applicability; (3) machine learning algorithms such as Random Forest and deep learning consistently outperform parametric methods, delivering stable AGB estimates; (4) LiDAR data sources, point cloud density, and forest types are important factors that significantly affect the accuracy of individual tree AGB estimation. Future research should emphasize deep learning applications for improving point cloud segmentation and 3D structure extraction accuracy in complex forest environments. Additionally, optimizing multi-sensor data fusion strategies to address data matching and resolution differences will be crucial for developing more accurate and widely applicable AGB estimation models.
(This article belongs to the Section Urban Forestry)
Figure 1. (a) Annual publications from 2003 to 2024. (b) Top 8 productive journals from 2003 to 2024. The size of the circles represents the number of publications; larger circles indicate higher publication volumes.
Figure 2. (a) Top 20 most productive countries. (b) Country collaboration map. The line thickness represents the strength of collaboration.
Figure 3. (a) Top 10 most productive affiliations from 2003 to 2024. (b) The performances of the top 10 most productive authors from 2003 to 2024. (c) Affiliation co-occurrence network. (d) Author co-occurrence network. The size of the circles represents the publication volume, and edge thickness represents the collaboration strength.
Figure 4. The distribution of reviewed studies categorized by (a) country and (b–g) city. The size of the circles represents the number of studies, with larger circles indicating a higher number of studies.
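
Finding (3) of this review, that ensemble learners such as Random Forest tend to outperform parametric allometric models when mapping LiDAR structural metrics to AGB, can be illustrated with a short sketch. The synthetic tree height, crown diameter, and crown volume features and the toy allometry below are assumptions for illustration only, not data from the reviewed studies.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for LiDAR-derived structural metrics of individual trees.
rng = np.random.default_rng(7)
n = 500
height = rng.uniform(5, 30, n)             # tree height (m)
crown_d = rng.uniform(1, 10, n)            # crown diameter (m)
crown_vol = (np.pi / 6) * crown_d**2 * height * rng.uniform(0.3, 0.6, n)  # rough crown volume (m^3)
agb = 0.05 * height**1.8 * crown_d**1.1 + rng.normal(0, 5, n)  # toy allometry, kg per tree

X = np.column_stack([height, crown_d, crown_vol])
X_train, X_test, y_train, y_test = train_test_split(X, agb, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("R2 on held-out trees:", round(r2_score(y_test, model.predict(X_test)), 3))
print("Feature importances (height, crown diameter, crown volume):", model.feature_importances_)
```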
21 pages, 283 KiB  
Article
Sustainability in Question: Climate Risk, Environment, Social and Governance Performance, and Tax Avoidance
by Yuxuan Zhang, Leihong Yuan, Idawati Ibrahim and Ropidah Omar
Sustainability 2025, 17(4), 1400; https://doi.org/10.3390/su17041400 - 8 Feb 2025
Viewed by 592
Abstract
This study examines whether firm managers strategically use tax avoidance to address climate risks, with a specific focus on strategies employed to reduce corporate income tax liabilities. It incorporates the moderating role of ESG performance and is grounded in stakeholder theory to highlight the balance between sustainability and corporate profit expectations. Using secondary data from Chinese A-listed companies during 2017–2023, the findings reveal that firms increasingly adopt tax avoidance practices in response to rising climate risks. More specifically, strong ESG performance positively moderates this relationship, underscoring its role in shaping socially and ethically responsible strategies to tackle sustainability challenges. By employing panel data analysis and addressing endogeneity through instrumental variable tests, Propensity Score Matching, and the Heckman test, this study provides robust results. These findings contribute to the literature on tax avoidance and provide practical insights for actionable ESG initiatives. For firms, these include improving transparency in tax reporting and integrating sustainability metrics into corporate ESG frameworks. For tax authorities, they involve upgrading the tax-related big data supervision system and fostering alignment between corporate practices and government policies.
11 pages, 1710 KiB  
Article
Association Between Long Term Exposure to PM2.5 and Its Components on Severe Obesity in Chinese Children and Adolescents: A National Study in China
by Tongjun Guo, Tianjiao Chen, Li Chen, Jieyu Liu, Xinli Song, Yi Zhang, Ruolin Wang, Jianuo Jiang, Yang Qin, Ziqi Dong, Dengcheng Zhang, Zhiying Song, Wen Yuan, Yanhui Dong, Yi Song and Jun Ma
Children 2024, 11(12), 1536; https://doi.org/10.3390/children11121536 - 18 Dec 2024
Viewed by 617
Abstract
Background: The aim of this study was to explore the association between long-term exposure to particulate matter with an aerodynamic diameter <2.5 μm (PM2.5) and its components and severe obesity in children and adolescents. Methods: Data for children and adolescents aged 9–18 in this cross-sectional study were obtained from the 2019 Chinese National Survey on Students’ Constitution and Health (CNSSCH). Data for PM2.5 and its components were obtained from the Tracking Air Pollution in China (TAP) dataset and matched with information on these children. Logistic regression models were used to assess the risk of severe obesity associated with long-term exposure to PM2.5 and its components. Results: A total of 160,205 children were included in the analysis. Long-term exposure to PM2.5 may increase the odds of severe childhood obesity, with this effect being more pronounced in girls. Among boys, the component with the most significant impact on severe obesity was organic matter (OM). The impact of PM2.5 and its components on severe obesity was greater in children from low economic and low parental education level households. Children with unhealthy lifestyle habits have higher odds of severe obesity due to long-term exposure to PM2.5 and its components. Conclusions: The findings of this research support the development of strategies aimed at addressing severe obesity in children, suggesting that adopting healthy lifestyle practices could mitigate the odds of severe obesity due to PM2.5 and its components. There is a need for an increased focus on children in economically underdeveloped areas and those with unhealthy lifestyle habits, particularly those in rural areas and those who do not engage in adequate physical activity or get enough sleep.
(This article belongs to the Section Global Pediatric Health)
Figure 1. Flow chart of study participants.
Figure 2. Spatial distribution of study cities and demonstration of average PM2.5 exposure of participants. (A) Distribution of PM2.5 and its components. (B) Distribution of obesity at different levels.
Figure 3. Odds ratios of overweight, class 1 obesity, class 2 obesity, and class 3 obesity in the higher quartile groups compared to the lowest quartile group.
Figure 4. Odds ratios of severe obesity per IQR increase in exposure to PM2.5 and its components in each subgroup.
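
The central quantity reported in this abstract, the odds of severe obesity per IQR increase in exposure, comes from exponentiating a logistic regression coefficient after scaling the exposure by its interquartile range. The following sketch shows the computation on synthetic data; the covariates, effect sizes, and sample are assumptions, not the CNSSCH results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
pm25 = rng.gamma(shape=4, scale=10, size=n)                 # long-term PM2.5 exposure (ug/m3)
age = rng.integers(9, 19, n)
logit = -4 + 0.02 * pm25 + 0.03 * (age - 13) + rng.normal(0, 0.5, n)
severe_obesity = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Scale the exposure by its interquartile range so the coefficient is "per IQR".
iqr = np.subtract(*np.percentile(pm25, [75, 25]))
X = sm.add_constant(pd.DataFrame({"pm25_iqr": pm25 / iqr, "age": age}))
fit = sm.Logit(severe_obesity, X).fit(disp=0)

# Odds ratio of severe obesity per IQR increase in PM2.5, with 95% CI.
or_per_iqr = np.exp(fit.params["pm25_iqr"])
ci_low, ci_high = np.exp(fit.conf_int().loc["pm25_iqr"])
print(f"OR per IQR: {or_per_iqr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```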
30 pages, 12451 KiB  
Article
A Method Coupling NDT and VGICP for Registering UAV-LiDAR and LiDAR-SLAM Point Clouds in Plantation Forest Plots
by Fan Wang, Jiawei Wang, Yun Wu, Zhijie Xue, Xin Tan, Yueyuan Yang and Simei Lin
Forests 2024, 15(12), 2186; https://doi.org/10.3390/f15122186 - 12 Dec 2024
Viewed by 770
Abstract
The combination of UAV-LiDAR and LiDAR-SLAM (Simultaneous Localization and Mapping) technology can overcome the scanning limitations of different platforms and obtain comprehensive 3D structural information of forest stands. To address the challenges of traditional registration algorithms, such as high initial value requirements and susceptibility to local optima, this paper proposes a high-precision, robust NDT-VGICP registration method that integrates voxel features to register UAV-LiDAR and LiDAR-SLAM point clouds at the forest stand scale. First, the point clouds are voxelized, and their normal vectors and normal distribution models are computed; the initial transformation matrix is then quickly estimated based on the point pair distribution characteristics to achieve preliminary alignment. Second, high-dimensional feature weighting is introduced, and the iterative closest point (ICP) algorithm is used to optimize the distance between the matching point pairs, adjusting the transformation matrix to reduce the registration errors iteratively. Finally, the algorithm converges when the iterative conditions are met, yielding an optimal transformation matrix and achieving precise point cloud registration. The results show that the algorithm performs well in Chinese fir forest stands of different age groups (average RMSE: 4.27 cm horizontal, 3.86 cm vertical) and achieves high accuracy in single-tree crown vertex detection and tree height estimation (average F-score: 0.90; R2 for tree height estimation: 0.88). This study demonstrates that the NDT-VGICP algorithm can effectively fuse and collaboratively apply multi-platform LiDAR data, providing a methodological reference for accurately quantifying individual tree parameters and efficiently monitoring 3D forest stand structures.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
Figure 1. Location of the study area: (a) Fujian Province of China; (b) Nanping City; (c) topographic map of Shunchang County; (d) aerial view of site distribution; (e) UAV-LiDAR, LiDAR-SLAM, and ground data survey.
Figure 2. Stand conditions for (a) young-growth forests; (b) half-mature forests; (c) near-mature forests; (d) mature forests; and (e) over-mature forests.
Figure 3. Technical flowchart.
Figure 4. NDT coarse registration algorithm flowchart.
Figure 5. Schematic of the VGICP precision registration algorithm. (a) Construction of the voxel grid; (b) downsampling of the source and target point clouds; (c) calculation of voxel normal vectors; (d) construction of the point-voxel transformation field. The blue points in (a,b) are the original point clouds and the red points are the target point clouds. The red point in (c) is the nearest neighbor point cloud, the black point is the edge point cloud, and the yellow line is the voxel normal. The colored points in (d) are the matched point clouds.
Figure 6. The technical workflow of the improved individual tree segmentation method combining the rasterized canopy height model (CHM) and point cloud clustering.
Figure 7. Single-tree segmentation process based on horizontal distance and edge distance. (a–c) represent the point cloud data extracted from the study object using rolling segmentation blocks.
Figure 8. The registration effects of three algorithms on Chinese fir plantations across different age groups: (a) young-growth forests; (b) middle-aged forests; (c) near-mature forests; (d) mature forests; (e) over-mature forests. Taking plots Y-1, H-3, N-1, M-2, and O-1 as examples. Different colors represent point cloud datasets from two different platforms.
Figure 9. The registration effects of three algorithms on individual Chinese fir trees of different age groups: (a) young-growth forests; (b) middle-aged forests; (c) near-mature forests; (d) mature forests; (e) over-mature forests. Taking plots Y-1, H-3, N-1, M-2, and O-1 as examples. The white points represent the registered UAV-LiDAR data, and the color-rendered points represent the LiDAR-SLAM data. The white frames show the specific positions of the three slice angles of the local field of view.
Figure 10. Differential analysis of individual tree crown delineation apex detection based on three registration algorithms. (a) NDT-ICP algorithm; (b) NDT-GICP algorithm; (c) NDT-VGICP algorithm.
Figure 11. Main effects of age groups and three registration algorithms on the ITCD-F score and tree height RMSE using Tukey's test. Panels (a,b) show the main effects of age groups and registration algorithms on the ITCD-F score, while panels (c,d) show the main effects on tree height RMSE. In panels (a,c), different colored boxes represent different age groups; in panels (b,d), different colored boxes represent different registration algorithms.
Figure 12. Comparison of the optimized registration algorithm and the traditional algorithm across different age groups. (a) NDT-ICP algorithm; (b) NDT-GICP algorithm; (c) NDT-VGICP algorithm. "Y" represents young-growth forests; "H" represents half-mature forests; "N" represents near-mature forests; "M" represents mature forests; and "O" represents over-mature forests. The different colored columns in the figure represent different age groups.
Figure 13. Accuracy evaluation of remote sensing-derived tree height at individual tree and stand scales. (a) Fitting results of remote sensing-derived tree height at the individual tree level and field-measured tree height; (b) fitting results of remote sensing-derived stand average tree height and field-measured average tree height.
Figure 14. Accuracy evaluation of remote sensing-derived tree height for different age groups: (a) young-growth forests; (b) middle-aged forests; (c) near-mature forests; (d) mature forests; (e) over-mature forests.
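
The coarse-to-fine structure described in this abstract (voxelization and normal estimation, a fast initial alignment, then an ICP-style refinement) can be approximated with off-the-shelf Open3D registration. This is a simplified stand-in for, not a reproduction of, the authors' NDT-VGICP method, and the two PLY file names are placeholders.

```python
import open3d as o3d
import numpy as np

# Simplified coarse-to-fine registration in the spirit of the paper's pipeline:
# voxel downsampling + normals, a feature-based coarse alignment, then a
# point-to-plane ICP refinement. This approximates, not reproduces, NDT-VGICP.
voxel = 0.2  # metres; tune to point density

def preprocess(path):
    pcd = o3d.io.read_point_cloud(path)  # placeholder file path
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

source, source_fpfh = preprocess("uav_lidar_plot.ply")
target, target_fpfh = preprocess("slam_lidar_plot.ply")

# Coarse alignment (RANSAC over FPFH correspondences stands in for the NDT step).
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    source, target, source_fpfh, target_fpfh, True, voxel * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine alignment: point-to-plane ICP initialized with the coarse transform.
fine = o3d.pipelines.registration.registration_icp(
    source, target, voxel * 0.8, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("Fitness:", fine.fitness, "Inlier RMSE:", fine.inlier_rmse)
print("Transformation:\n", np.asarray(fine.transformation))
```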
16 pages, 4090 KiB  
Article
Enhancing Chinese Dialogue Generation with Word–Phrase Fusion Embedding and Sparse SoftMax Optimization
by Shenrong Lv, Siyu Lu, Ruiyang Wang, Lirong Yin, Zhengtong Yin, Salman A. AlQahtani, Jiawei Tian and Wenfeng Zheng
Systems 2024, 12(12), 516; https://doi.org/10.3390/systems12120516 - 24 Nov 2024
Viewed by 660
Abstract
Chinese dialogue generation faces multiple challenges, such as semantic understanding, information matching, and response fluency. Generative dialogue systems for Chinese conversation are difficult to construct because of the flexible word order, the great impact of word replacement on semantics, and the complex implicit context. Existing methods still have limitations in addressing these issues. To tackle these problems, this paper proposes an improved Chinese dialogue generation model based on the transformer architecture. The model uses a multi-layer transformer decoder as the backbone and introduces two key techniques, namely incorporating pre-trained language model word embeddings and optimizing the sparse Softmax loss function. For word-embedding fusion, we concatenate the word vectors from the pre-trained model with character-based embeddings to enhance the semantic information of word representations. The sparse Softmax optimization effectively mitigates the overfitting issue by introducing a sparsity regularization term. Experimental results on the Chinese short text conversation (STC) dataset demonstrate that our proposed model significantly outperforms the baseline models on automatic evaluation metrics, such as BLEU and Distinct, with an average improvement of 3.5 percentage points. Human evaluations also validate the superiority of our model in generating fluent and relevant responses. This work provides new insights and solutions for building more intelligent and human-like Chinese dialogue systems.
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
Figure 1. Transformer-based generative dialogue system.
Figure 2. Workflow of word–phrase fusion embedding.
Figure 3. An example of word–phrase fusion embedding.
Figure 4. Evaluation results. (a) Evaluation results based on recall rate; (b) evaluation results of greedy matching and embedding average.
Figure 5. Evaluation results of different parameters k under the char-word dialog model.
Figure 6. Attention-matching heatmap examples. (a) Attention-matching heatmap based on tokenization; (b) attention-matching heatmap based on character–word fusion embedding.
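
The word-embedding fusion step described in this abstract, concatenating pre-trained word vectors with character-based embeddings, reduces to a small module. The sketch below is a hypothetical PyTorch version with assumed vocabulary sizes and dimensions; it is not the authors' implementation, and a trainable table stands in for the pre-trained word vectors.

```python
import torch
import torch.nn as nn

class WordPhraseFusionEmbedding(nn.Module):
    """Concatenate a character embedding with the embedding of the word/phrase
    that character belongs to, then project back to the model dimension."""
    def __init__(self, char_vocab, word_vocab, char_dim=256, word_dim=300, model_dim=512):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, char_dim)
        # In the paper the word vectors come from a pre-trained language model;
        # here a trainable table stands in for them.
        self.word_emb = nn.Embedding(word_vocab, word_dim)
        self.proj = nn.Linear(char_dim + word_dim, model_dim)

    def forward(self, char_ids, word_ids):
        # char_ids, word_ids: (batch, seq_len); word_ids[i, t] is the id of the
        # word containing character t, so the word vector is broadcast to its characters.
        fused = torch.cat([self.char_emb(char_ids), self.word_emb(word_ids)], dim=-1)
        return self.proj(fused)

emb = WordPhraseFusionEmbedding(char_vocab=8000, word_vocab=50000)
chars = torch.randint(0, 8000, (2, 10))
words = torch.randint(0, 50000, (2, 10))
print(emb(chars, words).shape)  # torch.Size([2, 10, 512])
```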
12 pages, 531 KiB  
Article
Adjunctive Therapy with Chinese Herbal Medicine Lowers Risk of Hearing Loss in Type 2 Diabetes Patients: Results from a Cohort-Based Case-Control Study
by Hui-Ju Huang, Hanoch Livneh, Chieh-Tsung Yen, Ming-Chi Lu, Wei-Jen Chen and Tzung-Yi Tsai
Pharmaceuticals 2024, 17(9), 1191; https://doi.org/10.3390/ph17091191 - 10 Sep 2024
Viewed by 935
Abstract
Hearing loss is a frequently observed complication of type 2 diabetes (T2D). Emerging evidence has found that Chinese herbal medicine (CHM) can effectively treat chronic disease; nevertheless, it is unclear whether adding CHM to the routine management of T2D would modify the subsequent risk of hearing loss. This cohort-based case-control study was conducted to address this issue. First, a total of 64,418 subjects aged 20–70 years, diagnosed with T2D between 2002 and 2011, were extracted from a nationwide health claims database. Among them, we identified 4516 cases of hearing loss after T2D by the end of 2013. They were then randomly matched to 9032 controls without hearing loss at a 1:2 ratio. Following conditional logistic regression, we found that the addition of CHM to conventional care reduced the risk of developing hearing loss, with an adjusted odds ratio of 0.75 (95% confidence interval: 0.70–0.83). Specifically, taking CHM products for at least two years benefits T2D patients by lowering the subsequent risk of hearing loss. These findings imply that integrating CHM into conventional care is substantially correlated with a lower risk of hearing loss for T2D patients, but further basic research is needed to secure the application of finished herbal products.
(This article belongs to the Special Issue Natural Products in Diabetes Mellitus: 2nd Edition)
Figure 1. Flowchart of subject selection.
Figure 2. Hearing loss risk determined by multivariate conditional logistic regression across different herbal products. Y-axis: Chinese herbal medicines; X-axis: odds ratio.
23 pages, 1670 KiB  
Article
Digital Policy, Green Innovation, and Digital-Intelligent Transformation of Companies
by Xin Tan, Jinfang Jiao, Ming Jiang, Ming Chen, Wenpeng Wang and Yijun Sun
Sustainability 2024, 16(16), 6760; https://doi.org/10.3390/su16166760 - 7 Aug 2024
Viewed by 1718
Abstract
In the midst of rigorous market rivalry, enhancing a company's competitiveness and operational efficiency in an era of rapid IT advancement is a pressing concern for business leaders. The National Big Data Comprehensive Zone (BDCZ) pilot scheme, instituted by the Chinese government, systematically addresses seven core objectives, encompassing data resource management, sharing and disclosure, data center consolidation, application of data resources, and the circulation of data elements. This policy initiative aims to bolster the establishment of information infrastructure through big data applications, facilitate the influx and movement of talent, and propel corporate sustainable growth. Utilizing a quasi-natural experiment approach, we assess the pilot policy's influence on the digital-intelligent transformation (DIT) of manufacturing companies from a green innovation ecosystem perspective, employing datasets from 2010 to 2022 and methodologies such as Difference-in-Differences (DID), Synthetic Difference-in-Differences (SDID), and Propensity Score Matching-DID (PSM-DID). The findings indicate that the BDCZ initiative significantly fosters DIT in manufacturing companies. The policy's establishment confers benefits, including access to increased government support and innovation capital, thereby enhancing the sustainability of green innovation efforts. It also strengthens corporate collaboration, engendering synergistic benefits that improve regional economic progression and establish a conducive environment for digital development, ultimately enhancing the regional innovation ecosystem. The pilot policy's impact varies across entities, with more profound effects observed in developed financial markets compared to underdeveloped ones. Additionally, non-state-owned companies exhibit a greater response to BDCZ policy interventions than their state-owned counterparts. Moreover, manufacturing businesses with a higher proportion of executive shareholding are more substantially influenced by the BDCZ. This article fills a research gap by using the quasi-natural experiment of the BDCZ to test its impact on the DIT of companies and provides inspiration for local governments to mobilize the enthusiasm of manufacturing companies for DIT.
Figure 1. Mechanism diagram.
Figure 2. Parallel trend hypothesis test.
Figure 3. Dynamic trend based on SDID.
Figure 4. Placebo test.
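
The Difference-in-Differences setup named in this abstract boils down to a two-way fixed-effects regression in which the treated-by-post interaction estimates the policy effect. A minimal sketch on synthetic firm-year data follows; the variable names, timing, and effect size are assumptions, and the paper's SDID and PSM-DID variants are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic firm-year panel; dit stands in for a digital-intelligent transformation score.
rng = np.random.default_rng(0)
n_firms, years = 200, range(2010, 2023)
rows = []
for f in range(n_firms):
    treated = int(f < 100)           # firms located in an assumed BDCZ pilot zone
    for y in years:
        post = int(y >= 2016)        # assumed policy start year for illustration
        dit = 1.0 + 0.5 * treated + 0.3 * post + 0.8 * treated * post + rng.normal(0, 1)
        rows.append({"firm": f, "year": y, "treated": treated, "post": post, "dit": dit})
df = pd.DataFrame(rows)

# Two-way fixed effects: firm and year dummies absorb level differences;
# the treated:post interaction is the DID estimate of the policy effect.
model = smf.ols("dit ~ treated:post + C(firm) + C(year)", data=df).fit()
print("DID estimate:", round(model.params["treated:post"], 3))
```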
14 pages, 7922 KiB  
Article
An Ultra-Thin Multi-Band Logo Antenna for Internet of Vehicles Applications
by Jun Li, Junjie Huang, Hongli He and Yanjie Wang
Electronics 2024, 13(14), 2792; https://doi.org/10.3390/electronics13142792 - 16 Jul 2024
Cited by 1 | Viewed by 1369
Abstract
In this paper, an ultra-thin logo antenna (LGA) operating in multiple frequency bands for Internet of Vehicles (IoV) applications is proposed. The designed antenna covers five frequency bands, as the simulation indicated: 0.86–1.01 GHz (16.0%) for LoRa communication, 1.3–1.36 GHz (4.6%) for GPS, 2.32–2.71 GHz (16.3%) for Bluetooth communication, 3.63–3.89 GHz (6.9%) for 5G communication, and 5.27–5.66 GHz (7.1%) for WLAN. The initial antenna started with a modified coplanar waveguide (CPW)-fed circular disk monopole radiator. To create extra current paths and further excite other modes, the disk was hollowed out into the shape of the car logo of the Chinese smart EV brand XPENG, forming four rhombic parasitic patches. Next, four triangular parasitic patches were inserted to improve the impedance matching of the band at 5.6 GHz. Finally, four metallic vias were loaded to adjust the resonant points and reduce the return loss. Designed on a flexible substrate, the antenna can easily bend to a certain degree in complex vehicular communication for IoV. The measured results under horizontal and vertical bending show that the LGA can operate in a bending state while maintaining good performance. The proposed LGA addresses the issue of applying one single multi-band antenna to allow vehicles to communicate over several channels, which relieves the need for a sophisticated antenna network.
Figure 1. (a) Geometry of the proposed antenna; (b) logo of the Chinese smart EV brand XPENG; (c) evolution process of the proposed LGA; (d) simulated return losses of ANT I, II, III, and IV.
Figure 2. Configuration of (a) ANT Ref., (b) ANT I, and (c) return losses of both antennas.
Figure 3. (a) Simulated return losses and surface current distributions of (b) ANT I and (c) ANT II at 3.8 GHz.
Figure 4. (a) Simulated return losses of ANT II and III, and surface current distributions of (b) ANT II and (c) ANT III at 5.6 GHz.
Figure 5. Simulated return losses of the proposed LGA (a) with a varied number of vias and (b) with a varied radius. (c) Comparison of ANT III and IV.
Figure 6. Parameter analysis of the proposed LGA. (a) L3 = 5.5 mm with varied W3; (b) W3 = 7.5 mm with varied L3; (c) W3 = 0.75 mm, L3 = 5.5 mm with varied D2.
Figure 7. Simulated radiation patterns on the XOY and XOZ planes at each resonant frequency.
Figure 8. Simulated surface current distribution at the five resonant frequencies.
Figure 9. Photograph of the fabricated flexible antenna and of the antenna under bending around a bottle.
Figure 10. (a) A blueprint of the proposed antenna installed on a car in vehicular communication; (b) measurement environment of the antenna.
Figure 11. Measured and simulated (a) return losses and (b) realized gain of the proposed antenna.
Figure 12. Simulated and measured normalized radiation patterns at 0.92, 1.32, 2.4, 3.75, and 5.46 GHz.
Figure 13. Measurement environment of the antenna under (a) horizontal and (b) vertical bending.
Figure 14. Measurement of |S11| under horizontal and vertical bending.
18 pages, 1459 KiB  
Article
Contrastive Learning Penalized Cross-Entropy with Diversity Contrastive Search Decoding for Diagnostic Report Generation of Reduced Token Repetition
by Taozheng Zhang, Jiajian Meng, Yuseng Yang and Shaode Yu
Appl. Sci. 2024, 14(7), 2817; https://doi.org/10.3390/app14072817 - 27 Mar 2024
Cited by 2 | Viewed by 1438
Abstract
Medical imaging description and disease diagnosis are vitally important yet time-consuming. Automated diagnosis report generation (DRG) from medical imaging description can reduce clinicians' workload and improve their routine efficiency. To address this natural language generation task, fine-tuning a pre-trained large language model (LLM) is cost-effective and indispensable, and its success has been witnessed in many downstream applications. However, semantic inconsistency of sentence embeddings has been massively observed from undesirable repetitions or unnaturalness in text generation. To address the underlying issue of the anisotropic distribution of token representations, in this study, a contrastive learning penalized cross-entropy (CLpCE) objective function is implemented to enhance the semantic consistency and accuracy of token representation by guiding the fine-tuning procedure towards a specific task. Furthermore, to improve the diversity of token generation in text summarization and to prevent sampling from the unreliable tail of token distributions, a diversity contrastive search (DCS) decoding method is designed for restricting the report generation derived from a probable candidate set with maintained semantic coherence. In addition, a novel metric named the maximum of token repetition ratio (maxTRR) is proposed to estimate the token diversity and to help determine the candidate output. Based on a Chinese version of the generative pre-trained Transformer 2 (GPT-2) LLM, the proposed CLpCE with DCS (CLpCEwDCS) decoding framework is validated on 30,000 desensitized text samples from the “Medical Imaging Diagnosis Report Generation” track of the 2023 Global Artificial Intelligence Technology Innovation Competition. Using four kinds of metrics evaluated from n-gram word matching, semantic relevance, and content similarity, as well as the maxTRR metric, extensive experiments reveal that the proposed framework effectively maintains semantic coherence and accuracy (BLEU-1, 0.4937; BLEU-2, 0.4107; BLEU-3, 0.3461; BLEU-4, 0.2933; METEOR, 0.2612; ROUGE, 0.5182; CIDER, 1.4339) and improves text generation diversity and naturalness (maxTRR, 0.12). The phenomenon of dull or repetitive text generation is common when fine-tuning pre-trained LLMs for natural language processing applications. This study might shed some light on relieving this issue by developing comprehensive strategies to enhance the semantic coherence, accuracy, and diversity of sentence embeddings.
Figure 1. The structure of Transformer and GPT-2 decoder blocks.
Figure 2. The CLpCE-based model fine-tuning procedure. L_CE guides the supervised learning and L_CL directs the unsupervised learning, with both parts contributing to the fine-tuning of pre-trained LLMs for accurate feature representation towards a specific task.
Figure 3. The effect of different β values and decoding methods on DRG text summarization. The horizontal axis denotes the β values in the CLpCE objective function, and the vertical axis presents the values of the evaluation metrics (BLEU-1 through BLEU-4, METEOR, ROUGE, and CIDER, distinguished by line style, marker, and color).
Figure 4. The effect of the control threshold ρ on text generation diversity (ρ = 0.01 and ρ = 0.10).
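
Contrastive-search-style decoding of the kind this abstract describes is available off the shelf in Hugging Face transformers through the penalty_alpha and top_k arguments of generate(). The sketch below uses that built-in variant with the generic "gpt2" checkpoint as a stand-in for the fine-tuned Chinese GPT-2, and adds a rough illustration of the maxTRR idea; neither is the authors' exact DCS method or metric implementation.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Findings: mild opacity in the left lower lobe."
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    penalty_alpha=0.6,   # degeneration penalty weight balancing confidence vs. diversity
    top_k=4,             # size of the candidate set considered at each step
    max_new_tokens=60,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))

def max_token_repetition_ratio(token_ids):
    """Rough sketch of the maxTRR idea: highest single-token share of the output."""
    ids = token_ids.tolist()
    return max(ids.count(t) for t in set(ids)) / len(ids)

print(max_token_repetition_ratio(output[0]))
```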
27 pages, 7605 KiB  
Article
An Experimental Investigation into the Scope Assignment of Japanese and Chinese Quantifier-Negation Sentences
by Yunchuan Chen
Languages 2024, 9(3), 111; https://doi.org/10.3390/languages9030111 - 20 Mar 2024
Viewed by 1467
Abstract
Quantifier-Negation sentences such as all teachers did not use Sandy’s car are known to allow an inverse scope interpretation in English. However, there is a lack of experimental evidence to determine whether this interpretation is allowed in equivalent sentences in Japanese and Chinese. To address this issue, this study conducted a sentence–picture matching truth value judgment experiment in both Japanese and Chinese. The data suggested that Japanese Quantifier-Negation sentences do allow inverse scope readings, which suggests that the subject may be interpreted within the scope of negation. In contrast, Chinese Quantifier-Negation sentences prohibit inverse scope readings, which is in accordance with the strong scope rigidity consistently observed in this language. This paper also discussed how to develop a valid experiment for investigating scope ambiguities.
17 pages, 4945 KiB  
Article
Research on Chinese Named Entity Recognition Based on Lexical Information and Spatial Features
by Zhipeng Zhang, Shengquan Liu, Zhaorui Jian and Huixin Yin
Appl. Sci. 2024, 14(6), 2242; https://doi.org/10.3390/app14062242 - 7 Mar 2024
Cited by 1 | Viewed by 1261
Abstract
In the field of Chinese named entity recognition, recent research has sparked new interest by combining lexical features with character-based methods. Although this vocabulary enhancement method provides a new perspective, it faces two main challenges: firstly, using character-by-character matching can easily lead to conflicts during the vocabulary matching process. Although existing solutions attempt to alleviate this problem by obtaining semantic information about words, they still lack sufficient temporal sequential or global information acquisition; secondly, due to the limitations of dictionaries, there may be words in a sentence that do not match the dictionary. In this situation, existing vocabulary enhancement methods cannot effectively play a role. To address these issues, this paper proposes a method based on lexical information and spatial features. This method carefully considers the neighborhood and overlap relationships of characters in vocabulary and establishes global bidirectional semantic and temporal sequential information to effectively address the impact of conflicting vocabulary and character fusion on entity segmentation. Secondly, the attention score matrix extracted by the point-by-point convolutional network captures the local spatial relationship between characters without fused vocabulary information and characters with fused vocabulary information, aiming to compensate for information loss and strengthen spatial connections. The comparison results with the baseline model show that the SISF method proposed in this paper improves the F1 metric by 0.72%, 3.12%, 1.07%, and 0.37% on the Resume, Weibo, Ontonotes 4.0, and MSRA datasets, respectively.
Figure 1. Flat-Lattice structure.
Figure 2. Overall model.
Figure 3. Bi-LSTM obtains lexical semantic information.
Figure 4. Character-vocabulary encoding model.
Figure 5. From (a,b), it is observed that when there is a conflict between words, the global bidirectional semantic and sequential temporal information of the matched words is obtained, and the weights of the conflicting matched words are adjusted step by step to effectively alleviate the conflict between the matched words. (a) Obtaining semantic and sequential temporal information. (b) Semantic and sequential temporal information not captured.
Figure 6. Attention visualization: (a) the score matrix of characters with fused vocabulary information versus those with unfused lexical information after local attention; (b) the same score matrix without local attention.
15 pages, 846 KiB  
Article
Holistic Spatial Reasoning for Chinese Spatial Language Understanding
by Yu Zhao and Jianguo Wei
Appl. Sci. 2023, 13(21), 11712; https://doi.org/10.3390/app132111712 - 26 Oct 2023
Viewed by 1359
Abstract
Spatial language understanding (SLU) is an important task in the field of information extraction, and it involves complex spatial analyses and reasoning processes. Unlike English SLU, in the case of the Chinese language, there may be some language-specific challenges, such as the phenomenon of polysemia and the substitution of synonyms. In this work, we explore Chinese SLU by taking advantage of large language models. Inspired by recent chain-of-thought (CoT) strategies, in this study, we propose the Spatial-CoT template to help improve LLMs’ reasoning abilities in order to deal with the challenges of Chinese SLU. Spatial-CoT offers LLMs three steps of instructions from different perspectives, namely, entity extraction, context analysis, and common knowledge analysis. We evaluate our framework on the Chinese SLU dataset SpaCE, which contains three subtasks: abnormal spatial semantics recognition, spatial role labeling, and spatial scene matching. The experimental results show that our Spatial-CoT outperforms vanilla prompt learning on ChatGPT and achieves competitive performance in comparison with traditional supervised models. Further analysis revealed that our method could address the phenomenon of polysemia and the substitution of synonyms in Chinese spatial language understanding.
(This article belongs to the Section Computing and Artificial Intelligence)
Figure 1. Cases of the phenomenon of polysemia (a) and the substitution of synonyms (b) in the Chinese language.
Figure 2. The overall framework of our chain-of-thought spatial reasoning.
Figure 3. Few-shot performance on the ASpSR and SpRL tasks. The x-axis denotes the number of samples; the y-axis denotes the task performance.
Figure 4. Comparison among the prompt templates that we explored.
Figure 5. Cases of the vanilla prompt and Spatial-CoT method for ASpSR (left) and SpRL (right).
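
The three-step instruction structure of Spatial-CoT (entity extraction, context analysis, common knowledge analysis) can be sketched as a prompt builder. The wording below and the call_llm placeholder are assumptions for illustration, not the authors' actual template or evaluation pipeline.

```python
# Illustrative chain-of-thought prompt in the spirit of the Spatial-CoT template.
def build_spatial_cot_prompt(sentence: str) -> str:
    return (
        "You are analyzing spatial semantics in a Chinese sentence.\n"
        f"Sentence: {sentence}\n"
        "Step 1 (entity extraction): list the trajector, landmark, and spatial indicator.\n"
        "Step 2 (context analysis): explain how the context constrains the spatial relation, "
        "paying attention to polysemous words and synonym substitutions.\n"
        "Step 3 (common knowledge analysis): check whether the described spatial scene is "
        "physically plausible given everyday world knowledge.\n"
        "Finally, answer whether the spatial semantics are normal or abnormal."
    )

def call_llm(prompt: str) -> str:
    # Placeholder for an actual LLM call (e.g., ChatGPT as used in the paper).
    raise NotImplementedError

print(build_spatial_cot_prompt("杯子放在桌子下面的抽屉里。"))
```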
19 pages, 4885 KiB  
Article
SLNER: Chinese Few-Shot Named Entity Recognition with Enhanced Span and Label Semantics
by Zhe Ren, Xizhong Qin and Wensheng Ran
Appl. Sci. 2023, 13(15), 8609; https://doi.org/10.3390/app13158609 - 26 Jul 2023
Cited by 2 | Viewed by 1668
Abstract
Few-shot named entity recognition requires sufficient prior knowledge to transfer valuable knowledge to the target domain with only a few labeled examples. Existing Chinese few-shot named entity recognition methods suffer from inadequate prior knowledge and limitations in feature representation. In this paper, we utilize enhanced Span and Label semantic representations for Chinese few-shot Named Entity Recognition (SLNER) to address the problem. Specifically, SLNER utilizes two encoders. One encoder is used to encode the text and its spans, and we employ the biaffine attention mechanism and self-attention to obtain enhanced span representations. This approach fully leverages the internal composition of entity mentions, leading to more accurate feature representations. The other encoder encodes the full label names to obtain label representations. Label names are broad representations of specific entity categories and share similar semantic meanings with entities. This similarity allows label names to offer valuable prior knowledge in few-shot scenarios. Finally, our model learns to match span representations with label representations. We conducted extensive experiments on three sampling benchmark Chinese datasets and a self-built food safety risk domain dataset. The experimental results show that our model outperforms previous state-of-the-art methods by 0.20–6.57% in F1 score in few-shot settings.
Figure 1. Example of an NER task. The entities to be recognized are highlighted within dashed boxes, and different colors represent different entity types.
Figure 2. Traditional sequence-labeling NER method.
Figure 3. Span-based NER method.
Figure 4. Different entity labels. (a) The label ‘Location’ and its associated entities. (b) The label ‘Organization’ and its associated entities.
Figure 5. The overall structure of SLNER. The grey module on the left learns span representations, while the grey module on the right learns label representations. The model’s final predictions are calculated through distance matching. N_s represents the number of spans, and N_c represents the number of entity categories.
Figure 6. Different ways of span representation. (a) Simple start- and end-token concatenation as a span representation. (b) Span representation using the biaffine decoder method, which allows for information interaction between the start and end tokens (indicated by blue arrows). (c) The span representation method used in our model, which incorporates the interaction between the tokens within the span (indicated by yellow arrows).
Figure 7. Enhanced span representation.
Figure 8. Hierarchy chart of the RISK dataset.
Figure 9. Statistical chart of the number of various entity types in the RISK dataset.
Figure 10. Different definitions of label names.
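
The two ingredients named in this abstract, a biaffine-enhanced span representation and matching spans against encoded label names, can be sketched compactly. The module below is a hypothetical PyTorch illustration with assumed dimensions, not the SLNER architecture itself.

```python
import torch
import torch.nn as nn

class BiaffineSpanScorer(nn.Module):
    """Score a span from its start and end token states with a biaffine form:
    a bilinear term h_start^T U h_end plus a linear term over the concatenation."""
    def __init__(self, hidden=256):
        super().__init__()
        self.U = nn.Parameter(torch.randn(hidden, hidden) * 0.01)
        self.w = nn.Linear(2 * hidden, 1)

    def forward(self, h_start, h_end):
        bilinear = torch.einsum("bd,de,be->b", h_start, self.U, h_end)
        linear = self.w(torch.cat([h_start, h_end], dim=-1)).squeeze(-1)
        return bilinear + linear

def match_spans_to_labels(span_reprs, label_reprs):
    # Few-shot matching: assign each span the label whose encoded name is closest.
    sims = torch.cosine_similarity(span_reprs.unsqueeze(1), label_reprs.unsqueeze(0), dim=-1)
    return sims.argmax(dim=1)

scorer = BiaffineSpanScorer()
h_s, h_e = torch.randn(4, 256), torch.randn(4, 256)
print(scorer(h_s, h_e).shape)                                   # torch.Size([4])
print(match_spans_to_labels(torch.randn(4, 256), torch.randn(3, 256)))
```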
21 pages, 598 KiB  
Article
Enhancing Chinese Address Parsing in Low-Resource Scenarios through In-Context Learning
by Guangming Ling, Xiaofeng Mu, Chao Wang and Aiping Xu
ISPRS Int. J. Geo-Inf. 2023, 12(7), 296; https://doi.org/10.3390/ijgi12070296 - 22 Jul 2023
Cited by 2 | Viewed by 1828
Abstract
Address parsing is a crucial task in natural language processing, particularly for Chinese addresses. The complex structure and semantic features of Chinese addresses present challenges due to their inherent ambiguity. Additionally, different task scenarios require varying levels of granularity in address components, further complicating the parsing process. To address these challenges and adapt to low-resource environments, we propose CapICL, a novel Chinese address parsing model based on the In-Context Learning (ICL) framework. CapICL leverages a sequence generator, regular expression matching, BERT semantic similarity computation, and Generative Pre-trained Transformer (GPT) modeling to enhance parsing accuracy by incorporating contextual information. We construct the sequence generator using a small annotated dataset, capturing distribution patterns and boundary features of address types to model address structure and semantics, which mitigates interference from unnecessary variations. We introduce the REB–KNN algorithm, which selects similar samples for ICL-based parsing using regular expression matching and BERT semantic similarity computation. The selected samples, raw text, and explanatory text are combined to form prompts and inputted into the GPT model for prediction and address parsing. Experimental results demonstrate significant achievements of CapICL in low-resource environments, reducing dependency on annotated data and computational resources. Our model’s effectiveness, adaptability, and broad application potential are validated, showcasing its positive impact in natural language processing and geographical information systems.
Figure 1. Model overview. The generator takes the raw text as input and generates a regular expression sequence and a label word sequence. Subsequently, the REB–KNN algorithm selects a similar annotated sample based on the regular expression sequence and label word sequence. Then, the prompt generator constructs prompts corresponding to the selected sample. The generated prompts are fed into the Generative Pre-trained Transformer (GPT) model for prediction, and the address parsing result is extracted from the model’s output.
Figure 2. CapICL architecture. The sequence generator captures the distribution patterns and boundary features of address types in the raw text, generating regular expression and label word sequences. The REB–KNN algorithm performs regular expression matching and BERT-based semantic similarity computation on these two sequences, selecting annotated samples that are similar to the query (raw). The specific structure of the prompt template is illustrated in the top right corner of the figure, while the instruction provides a brief description of the dataset.
Figure 3. Trie data structure for representing INF (detailed information such as floor and room numbers) address components. The trie is constructed by reversing the address segmentation and captures the specific characteristics of the addressed entities. The root node provides overall information about the corresponding type, including the capacity (maximum number of sub-tree root nodes, e.g., "building"), the threshold (minimum score for selecting information when constructing regular expressions), and the maximum and minimum lengths defining the length range of entities for the current type. The sub-trees below the root node represent the segmented parts of the address, while all nodes follow the same structure, describing the features of the represented string (e.g., "building number"). Additionally, each node maintains a left/right 1-neighboring list, which contains the neighboring strings. The overall score represents the normalized frequency of the string occurrence in the dataset.
Figure 4. Directed Acyclic Graph (DAG) representing Chinese address component types. Nodes represent types of address components, and edges indicate the transition probability between types. The weight W of an edge represents the likelihood of transitioning from the current node to the next. The subscript of W is formed by connecting types with "-". If there is a previous node, it is included. The symbols $ and # represent the beginning and end of named entities, respectively.
Figure 5. Illustration of the arbitrary-granularity segmentation process for raw text. Firstly, the address component type and transition probabilities are obtained from the directed acyclic graph (DAG) based on the current state. At the same time, length information is retrieved from the trie. Next, the corresponding regular expression set (RES) is obtained from SORES and matched against the text. Then, an 8-dimensional vector is constructed by combining the type information, transition probabilities, length information, matched start and end positions, regular expression scores, match length, and numerical indicators. This vector is used as input to the binary classifier. By predicting the positive label, the validity of the segmentation is determined. Finally, the segmentation result is obtained.
Figure 6. Impact of K on model performance. Other experimental settings remain the same as in Table 4.
Figure 7. Comparison of model performance based on randomly sampled annotated datasets of different sizes. Each sample size was randomly sampled six times, and independent model evaluations were performed for each annotated dataset.
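
The REB–KNN idea, a regular-expression pre-filter followed by BERT semantic similarity to pick in-context examples, can be sketched in a few lines. The regex cues, the candidate pool, and the scoring below are illustrative assumptions rather than the paper's actual algorithm or data; only the bert-base-chinese checkpoint name refers to a real, existing model.

```python
import re
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("bert-base-chinese")

def embed(text: str) -> torch.Tensor:
    with torch.no_grad():
        out = model(**tokenizer(text, return_tensors="pt"))
    return out.last_hidden_state.mean(dim=1).squeeze(0)  # mean-pooled sentence vector

ADDRESS_PATTERN = re.compile(r"(省|市|区|县|路|街|号|栋|楼|室)")  # coarse structural cue

def select_examples(query: str, pool: list[str], k: int = 2) -> list[str]:
    # Keep candidates that share at least one structural token with the query,
    # then rank the survivors by cosine similarity of BERT embeddings.
    query_cues = set(ADDRESS_PATTERN.findall(query))
    candidates = [a for a in pool if query_cues & set(ADDRESS_PATTERN.findall(a))] or pool
    q_vec = embed(query)
    scored = sorted(candidates,
                    key=lambda a: torch.cosine_similarity(q_vec, embed(a), dim=0).item(),
                    reverse=True)
    return scored[:k]

pool = ["浙江省杭州市西湖区文三路100号", "北京市朝阳区建国路88号2栋301室", "上海市浦东新区世纪大道1号"]
print(select_examples("杭州市余杭区文一西路969号5号楼", pool))
```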
24 pages, 20309 KiB  
Article
Does the Rational Function Model’s Accuracy for GF1 and GF6 WFV Images Satisfy Practical Requirements?
by Xiaojun Shan and Jingyi Zhang
Remote Sens. 2023, 15(11), 2820; https://doi.org/10.3390/rs15112820 - 29 May 2023
Cited by 2 | Viewed by 1714
Abstract
The Gaofen-1 (GF-1) and Gaofen-6 (GF-6) satellites have acquired many GF-1 and GF-6 wide-field-view (WFV) images. These images have been made available for free use globally. The GF-1 WFV (GF-1) and GF-6 WFV (GF-6) images have rational polynomial coefficients (RPCs). In practical applications, RPC corrections of GF-1 and GF-6 images need to be completed using the rational function model (RFM). However, can the accuracy of the rational function model satisfy practical application requirements? To address this issue, a geometric accuracy method is proposed in this paper to evaluate the accuracy of the RFM of GF-1 and GF-6 images. First, RPC corrections were completed using the RFM and refined RFM, respectively. The RFM was constructed using the RPCs and the Shuttle Radar Topography Mission (SRTM) 90 m DEM. The RFM was refined via affine transformation based on control points (CPs), which resulted in a refined RFM. Then, an automatic matching method was proposed to complete the automatic matching of GF-1/GF-6 images and reference images, which enabled us to obtain many uniformly distributed CPs. Finally, these CPs were used to evaluate the geometric accuracy of the RFM and refined RFM. The 14th-layer Google images of the corresponding area were used as reference images. In the experiments, the advantages and disadvantages of BRIEF, SIFT, and the proposed method were first compared. Then, the root mean square error (RMSE) values of 10,561 Chinese, French, and Brazilian GF-1 and GF-6 images were calculated and statistically analyzed, and the local geometric distortions of the GF-1 and GF-6 images were evaluated; these were used to evaluate the accuracy of the RFM. Last, the accuracy of the refined RFM was evaluated using the eight GF-1 and GF-6 images. The experimental results indicate that the accuracy of the RFM for most GF-1 and GF-6 images cannot meet the practical requirement of being better than 1.0 pixel, the accuracy of the refined RFM for GF-1 images cannot meet the practical requirement of being better than 1.0 pixel, and the accuracy of the refined RFM for most GF-6 images meets the practical requirement of being better than 1.0 pixel. However, the RMSE values that meet the requirement are between 0.9 and 1.0, so the geometric accuracy can be further improved.
(This article belongs to the Special Issue Gaofen 16m Analysis Ready Data)
Figure 1. Workflow of the proposed method.
Figure 2. Workflow of the automatic matching method.
Figure 3. Histogram of the number of CPs for different methods.
Figure 4. The distribution of CPs of different methods for the No. 1 image. (a) BRIEF. (b) SIFT. (c) Proposed method.
Figure 5. The distribution of CPs of different methods for the No. 2 image. (a) BRIEF. (b) SIFT. (c) Proposed method.
Figure 6. The distribution of CPs of different methods for the No. 3 image. (a) BRIEF. (b) SIFT. (c) Proposed method.
Figure 7. The distribution of CPs of different methods for the No. 4 image. (a) BRIEF. (b) SIFT. (c) Proposed method.
Figure 8. Histogram of processing times for different methods.
Figure 9. The RMSE values obtained from the different methods for the four experimental images. (a) No. 1 image. (b) No. 2 image. (c) No. 3 image. (d) No. 4 image.
Figure 10. RMSE histogram of GF-1 images of China.
Figure 11. RMSE histogram of GF-1 images of Brazil.
Figure 12. RMSE histogram of GF-1 images of France.
Figure 13. RMSE histogram of GF-6 images of China.
Figure 14. RMSE histogram of GF-6 images of Brazil.
Figure 15. RMSE histogram of GF-6 images of France.
Figures 16–23. Analysis results of local geometric distortions in the No. 1 to No. 8 images: (a) plot of geometric errors; (b) histogram of geometric error values.
Figures 24–31. Analysis results of local geometric distortions in the No. 1 to No. 8 images: (a) plot of geometric errors; (b) histogram of geometric error values.
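
The refinement step this abstract evaluates, an affine transformation fitted from control points on top of the RFM projection and judged by RMSE against the 1.0-pixel requirement, can be sketched with a small least-squares fit. The synthetic control points and bias below are assumptions, not measurements from GF-1/GF-6 imagery.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cp = 50
ref_xy = rng.uniform(0, 12000, size=(n_cp, 2))                 # CP positions in the reference image (pixels)
true_shift = np.array([2.3, -1.7])
rfm_xy = ref_xy + true_shift + rng.normal(0, 0.4, (n_cp, 2))   # RFM-projected positions with a systematic bias

def fit_affine(src, dst):
    """Solve dst ≈ A @ [x, y, 1] for a 2x3 affine matrix by least squares."""
    design = np.hstack([src, np.ones((len(src), 1))])
    coeffs, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return coeffs.T  # shape (2, 3)

def rmse(a, b):
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

A = fit_affine(rfm_xy, ref_xy)
refined_xy = np.hstack([rfm_xy, np.ones((n_cp, 1))]) @ A.T

print("RMSE before refinement:", rmse(rfm_xy, ref_xy))      # dominated by the systematic bias
print("RMSE after refinement: ", rmse(refined_xy, ref_xy))  # should drop well below 1.0 pixel
```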