Search Results (35)

Search Parameters:
Keywords = MSERs

31 pages, 23384 KiB  
Article
A Hybrid Approach for Image Acquisition Methods Based on Feature-Based Image Registration
by Anchal Kumawat, Sucheta Panda, Vassilis C. Gerogiannis, Andreas Kanavos, Biswaranjan Acharya and Stella Manika
J. Imaging 2024, 10(9), 228; https://doi.org/10.3390/jimaging10090228 - 14 Sep 2024
Viewed by 1295
Abstract
This paper presents a novel hybrid approach to feature detection designed specifically for enhancing Feature-Based Image Registration (FBIR). Through an extensive evaluation involving state-of-the-art feature detectors such as BRISK, FAST, ORB, Harris, MinEigen, and MSER, the proposed hybrid detector demonstrates superior performance in terms of keypoint detection accuracy and computational efficiency. Three image acquisition methods (i.e., rotation, scene-to-model, and scaling transformations) are considered in the comparison. Applied across a diverse set of remote-sensing images, the proposed hybrid approach has shown marked improvements in match points and match rates, proving its effectiveness in handling varied and complex imaging conditions typical in satellite and aerial imagery. The experimental results have consistently indicated that the hybrid detector outperforms conventional methods, establishing it as a valuable tool for advanced image registration tasks. Full article
(This article belongs to the Section Image and Video Processing)
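The abstract does not spell out how the individual detectors are fused, so the following Python sketch only illustrates the general idea of a hybrid detector: keypoints from several OpenCV detectors (BRISK, FAST, ORB, Harris corners, and MSER) are pooled and then described with a single binary descriptor before matching. All parameter values and file names are assumptions, not the authors' implementation.

import cv2

# Hypothetical sketch of a "hybrid" detector: pool keypoints from several
# detectors, then describe the pooled set with one descriptor for matching.
def hybrid_keypoints(gray):
    detectors = [cv2.BRISK_create(), cv2.FastFeatureDetector_create(),
                 cv2.ORB_create(nfeatures=2000), cv2.MSER_create()]
    keypoints = []
    for det in detectors:
        keypoints.extend(det.detect(gray, None))
    # Harris corners via goodFeaturesToTrack, converted to cv2.KeyPoint objects
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True)
    if corners is not None:
        keypoints.extend(cv2.KeyPoint(float(x), float(y), 7.0)
                         for x, y in corners.reshape(-1, 2))
    return keypoints

def describe_and_match(gray1, gray2):
    brisk = cv2.BRISK_create()                    # one descriptor for all pooled keypoints
    kp1, des1 = brisk.compute(gray1, hybrid_keypoints(gray1))
    kp2, des2 = brisk.compute(gray2, hybrid_keypoints(gray2))
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return kp1, kp2, sorted(matcher.match(des1, des2), key=lambda m: m.distance)

img1 = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # assumed file names
img2 = cv2.imread("rotated.png", cv2.IMREAD_GRAYSCALE)
kp1, kp2, matches = describe_and_match(img1, img2)
print(len(matches), "tentative matches from the pooled keypoint set")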
Figures:
Figure 1. Flow diagram of the proposed methodology.
Figure 2. Diagonal approach for the hybrid feature-detection method.
Figure 3. Sampled color images from the AID database [46].
Figure 4. Grayscale conversion of the sampled color images from the AID database [46].
Figure 5. Various rotation angles applied to the park and railway station grayscale aerial images.
Figure 6. Scaling transformations (factors 0.7 and 2.0) applied to the VSSUT gate and Hirakud dam images.
Figure 7. Detection of feature keypoints in the park image under 150° rotation; green markers highlight the detected keypoints, with each subfigure corresponding to a different feature-detection method.
Figure 8. Detection of feature keypoints in the railway station image under 150° rotation for the different detectors.
Figure 9. Extraction of feature keypoints from the park image under 150° rotation, with each subfigure showing a different feature-extraction method.
Figure 10. Extraction of feature keypoints from the railway station image under 150° rotation for each feature-extraction method.
Figure 11. Matching of feature keypoints in the park image across different rotational views under 150° rotation; subfigures (a–f) show the individual detectors and (g) overlays the hybrid detector's result.
Figure 12. Matching of feature keypoints in the railway station image across different rotational views under 150° rotation.
Figure 13. Sequential presentation of the detection, extraction, and matching phases for various feature detectors on two sets of airport aerial images.
Figure 14. Sequential presentation of the detection, extraction, and matching phases for various feature detectors on two sets of bridge aerial images.
Figure 15. Comparison of the feature-detection performance of the MSER, BRISK, and hybrid detectors on the VSSUT gate and Hirakud dam images at the original scale and at scaling factors of 0.7 and 2.0.
Figure 16. Extraction of feature keypoints with the MSER, BRISK, and hybrid extractors at scaling factors of 0.7 and 2.0.
Figure 17. Matching of feature keypoints with the MSER, BRISK, and hybrid detectors on the VSSUT gate and Hirakud dam images at scaling factors of 0.7 and 2.0.
Figure 18. Registered images of different scenes using the hybrid feature detector.
Figure 19. Performance comparison of various feature detectors on park scene images.

26 pages, 11126 KiB  
Article
Infrared Bilateral Polarity Ship Detection in Complex Maritime Scenarios
by Dongming Lu, Longyin Teng, Jiangyun Tan, Mengke Wang, Zechen Tian and Guihua Wang
Sensors 2024, 24(15), 4906; https://doi.org/10.3390/s24154906 - 29 Jul 2024
Viewed by 783
Abstract
In complex maritime scenarios where the grayscale polarity of ships is unknown, existing infrared ship detection methods may struggle to accurately detect ships among significant interference. To address this issue, this paper first proposes an infrared image smoothing method composed of Grayscale Morphological Reconstruction (GMR) and a Relative Total Variation (RTV). Additionally, a detection method considering the grayscale uniformity of ships and integrating shape and spatiotemporal features is established for detecting bright and dark ships in complex maritime scenarios. Initially, the input infrared images undergo opening (closing)-based GMR to preserve dark (bright) blobs with the opposite suppressed, followed by smoothing the image with the relative total variation model to reduce clutter and enhance the contrast of the ship. Subsequently, Maximally Stable Extremal Regions (MSER) are extracted from the smoothed image as candidate targets, and the results from the bright and dark channels are merged. Shape features are then utilized to eliminate clutter interference, yielding single-frame detection results. Finally, leveraging the stability of ships and the fluctuation of clutter, true targets are preserved through a multi-frame matching strategy. Experimental results demonstrate that the proposed method outperforms ITDBE, MRMF, and TFMSER in seven image sequences, achieving accurate and effective detection of both bright and dark polarity ship targets. Full article
(This article belongs to the Special Issue Advanced Sensing Technologies for Marine Intelligent Systems)
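As a rough illustration of the bright/dark dual-channel idea described above, the sketch below uses grayscale morphological reconstruction (opening- or closing-by-reconstruction) to keep dark or bright blobs and then extracts MSER candidates from each smoothed channel. The RTV smoothing, shape screening, and multi-frame matching stages are omitted, and the structuring-element radius and MSER step size are placeholder values rather than the paper's settings.

import cv2
import numpy as np
from skimage.morphology import reconstruction, disk, erosion, dilation

def opening_by_reconstruction(img, radius=5):
    # Erode, then reconstruct by dilation: keeps dark blobs, suppresses bright clutter.
    seed = erosion(img, disk(radius))
    return reconstruction(seed, img, method="dilation").astype(np.uint8)

def closing_by_reconstruction(img, radius=5):
    # Dilate, then reconstruct by erosion: keeps bright blobs, suppresses dark clutter.
    seed = dilation(img, disk(radius))
    return reconstruction(seed, img, method="erosion").astype(np.uint8)

def candidate_ships(ir_gray, delta=5):
    mser = cv2.MSER_create(delta)                 # step size Delta is scene dependent
    dark_chan = opening_by_reconstruction(ir_gray)
    bright_chan = closing_by_reconstruction(ir_gray)
    _, boxes_dark = mser.detectRegions(255 - dark_chan)   # invert so dark ships appear as bright blobs
    _, boxes_bright = mser.detectRegions(bright_chan)
    return list(boxes_dark) + list(boxes_bright)  # merged (x, y, w, h) candidates from both channels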
Figures:
Figure 1. The framework of the proposed method.
Figure 2. Input images and the results of OGMR (CGMR) processing for the dark-target and bright-target images, with the corresponding 3D views of the gray distribution.
Figure 3. Windowed total variation, windowed inherent variation, and inverse relative total variation of the original dark-target infrared image (Figure 2(a1)) and of its OGMR result.
Figure 4. RTV-smoothed results of the dark-ship image (Figure 2(a2)) and the bright-ship image (Figure 2(c2)), with grayscale distributions and 3D views.
Figure 5. Smoothed results of images in different scenarios: (a) original inputs; (b,c) OGMR and CGMR results.
Figure 6. Grayscale distribution of dark (bright) ships and their surroundings before and after smoothing, with corresponding 3D views.
Figure 7. MSERs extracted from the smoothed images and the results of shape-feature screening (step size Δ = 3.5 for the dark-ship image and Δ = 8 for the bright-ship image), together with the merged results of the two channels.
Figure 8. The framework of multi-frame matching.
Figure 9. Detection results of ITDBE, MRMF, TFMSER, and the proposed single-frame and multi-frame methods on Seq1–Seq9; red rectangles mark target positions and yellow rectangles mark false alarms.
Figure 10. Multi-frame matching results: the first frame of each sequence, the 25th, 50th, ..., 300th frames of the seven sequences, and the last frame of each sequence.
Figure 11. ROC curves of the seven sequences, with IoU ranging from 0.1 to 1, showing the Dp and FAR curves for ITDBE, MRMF, TFMSER, and the proposed single-frame detection method.

24 pages, 5652 KiB  
Article
Detection of COVID-19: A Metaheuristic-Optimized Maximally Stable Extremal Regions Approach
by Víctor García-Gutiérrez, Adrián González, Erik Cuevas, Fernando Fausto and Marco Pérez-Cisneros
Symmetry 2024, 16(7), 870; https://doi.org/10.3390/sym16070870 - 9 Jul 2024
Viewed by 1179
Abstract
The challenges associated with conventional methods of COVID-19 detection have prompted the exploration of alternative approaches, including the analysis of lung X-ray images. This paper introduces a novel algorithm designed to identify abnormalities in X-ray images indicative of COVID-19 by combining the maximally stable extremal regions (MSER) method with metaheuristic algorithms. The MSER method is efficient and effective under various adverse conditions, utilizing symmetry as a key property to detect regions despite changes in scaling or lighting. However, calibrating the MSER method is challenging. Our approach transforms this calibration into an optimization task, employing metaheuristic algorithms such as Particle Swarm Optimization (PSO), Grey Wolf Optimizer (GWO), Firefly (FF), and Genetic Algorithms (GA) to find the optimal parameters for MSER. By automating the calibration process through metaheuristic optimization, we overcome the primary disadvantage of the MSER method. This innovative combination enables precise detection of abnormal regions characteristic of COVID-19 without the need for extensive datasets of labeled training images, unlike deep learning methods. Our methodology was rigorously tested across multiple databases, and the detection quality was evaluated using various indices. The experimental results demonstrate the robust capability of our algorithm to support healthcare professionals in accurately detecting COVID-19, highlighting its significant potential and effectiveness as a practical and efficient alternative for medical diagnostics and precise image analysis. Full article
(This article belongs to the Special Issue Symmetry and Metaheuristic Algorithms)
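The calibration-as-optimization idea can be illustrated with a toy stand-in: a plain random search over three MSER parameters, scored with a placeholder objective (here, overlap with a reference mask). The paper instead uses PSO, GWO, FF, and GA together with its own quality indices, so every function, bound, and score below is an assumption, not the authors' formulation.

import cv2
import numpy as np

rng = np.random.default_rng(0)

def fitness(xray, params, reference_mask):
    # Placeholder objective: IoU between MSER pixels and a reference mask.
    delta, min_area, max_variation = params
    mser = cv2.MSER_create(int(delta), int(min_area), 14400, float(max_variation))
    regions, _ = mser.detectRegions(xray)
    detected = np.zeros(xray.shape, dtype=bool)
    for pts in regions:
        detected[pts[:, 1], pts[:, 0]] = True
    union = np.logical_or(detected, reference_mask).sum()
    return np.logical_and(detected, reference_mask).sum() / union if union else 0.0

def calibrate_mser(xray, reference_mask, iters=200):
    bounds = np.array([[1, 20],       # delta
                       [30, 2000],    # min_area
                       [0.05, 1.0]])  # max_variation
    best_p, best_s = None, -1.0
    for _ in range(iters):
        p = rng.uniform(bounds[:, 0], bounds[:, 1])
        s = fitness(xray, p, reference_mask)
        if s > best_s:
            best_p, best_s = p, s
    return best_p, best_s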
Figures:
Figure 1. Evolutionary process, showcasing the gradual changes in the image across various threshold levels.
Figure 2. Process of varying the threshold values that define a region as stable: (a) 3D representation of the image and (b) interval of threshold values over which a profile is stable.
Figure 3. Representation of the process performed by the proposed methodology.
Figure 4. Images used in the computational experiments; images 1–3 correspond to healthy individuals and images 4 and 5 to individuals with COVID-19.
Figure 5. Results of the MSER-PSO, MSER-FF, MSER-GWO, and MSER-GA detection methods on healthy images.
Figure 6. Results of the detection methods applied to radiographs of patients infected with COVID-19.
Figure 7. Distributions of the (a) PSNR, (b) SSIM, and (c) FSIM indices.
Figure 8. Averaged computational time required by each approach.

17 pages, 8769 KiB  
Article
An Obstacle Detection Method Based on Longitudinal Active Vision
by Shuyue Shi, Juan Ni, Xiangcun Kong, Huajian Zhu, Jiaze Zhan, Qintao Sun and Yi Xu
Sensors 2024, 24(13), 4407; https://doi.org/10.3390/s24134407 - 7 Jul 2024
Viewed by 931
Abstract
The types of obstacles encountered in the road environment are complex and diverse, and accurate and reliable detection of obstacles is the key to improving traffic safety. Traditional obstacle detection methods are limited by the type of samples and therefore cannot detect others comprehensively. Therefore, this paper proposes an obstacle detection method based on longitudinal active vision. The obstacles are recognized according to the height difference characteristics between the obstacle imaging points and the ground points in the image, and the obstacle detection in the target area is realized without accurately distinguishing the obstacle categories, which reduces the spatial and temporal complexity of the road environment perception. The method of this paper is compared and analyzed with the obstacle detection methods based on VIDAR (vision-IMU based detection and range method), VIDAR + MSER, and YOLOv8s. The experimental results show that the method in this paper has high detection accuracy and verifies the feasibility of obstacle detection in road environments where unknown obstacles exist. Full article
(This article belongs to the Section Vehicular Sensing)
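The key step of locating where each candidate region meets the road can be sketched as follows: extract MSER regions from one frame and take the lowest image point of each region as the candidate ground-contact point. The two-frame height-difference test and the IMU/steering geometry are not shown; this is an assumed simplification for illustration, not the paper's implementation.

import cv2
import numpy as np

def lowest_points_of_msers(gray):
    # Extract MSER feature regions and the lowest image point of each region,
    # which serves as the candidate intersection of an obstacle with the road plane.
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray)
    lowest = []
    for pts in regions:                  # pts is an (N, 2) array of (x, y) pixels
        idx = int(np.argmax(pts[:, 1]))  # largest y = lowest point in the image
        lowest.append((int(pts[idx, 0]), int(pts[idx, 1])))
    return lowest, bboxes

frame = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # assumed file name
points, boxes = lowest_points_of_msers(frame)
print(len(points), "candidate ground-contact points")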
Figures:
Figure 1. Obstacle detection based on longitudinal active vision.
Figure 2. Obstacle ranging model.
Figure 3. Schematic diagram of static obstacle imaging.
Figure 4. Schematic diagram of dynamic obstacle imaging.
Figure 5. Schematic diagram of the camera rotation.
Figure 6. Architecture of the longitudinal active-camera obstacle detection system.
Figure 7. Steering angles corresponding to different radii of rotation.
Figure 8. Distance measurements corresponding to the different steering angles.
Figure 9. Two-frame image acquisition before and after camera rotation: (a) the obstacle image at the initial moment, (b) MSER-based feature-region extraction, (c) feature-point extraction (red * marks the lowest point of each extremal region, blue + the intersection of the obstacle with the road plane), and (d) the second frame acquired after camera rotation.
Figure 10. MSER feature-region extraction and region matching between the initial and subsequent moments; red regions and + mark the MSERs and their centroids at the initial moment, cyan regions and o those at the next moment.
Figure 11. Feature-point locations in the image at the initial moment (a) and at the next moment (b).
Figure 12. Obstacle area division; yellow boxes mark detected obstacle areas, and the number above each box is the distance from the obstacle to the camera.
Figure 13. Experimental equipment for the real-vehicle tests.
Figure 14. Real-vehicle experiment route.
Figure 15. Detection results.

24 pages, 8868 KiB  
Article
Unmanned Aerial Vehicle-Based Structural Health Monitoring and Computer Vision-Aided Procedure for Seismic Safety Measures of Linear Infrastructures
by Luna Ngeljaratan, Elif Ecem Bas and Mohamed A. Moustafa
Sensors 2024, 24(5), 1450; https://doi.org/10.3390/s24051450 - 23 Feb 2024
Cited by 1 | Viewed by 1979
Abstract
Computer vision in the structural health monitoring (SHM) field has become popular, especially for processing unmanned aerial vehicle (UAV) data, but still has limitations both in experimental testing and in practical applications. Prior works have focused on UAV challenges and opportunities for the vibration-based SHM of buildings or bridges, but practical and methodological gaps exist specifically for linear infrastructure systems such as pipelines. Since they are critical for the transportation of products and the transmission of energy, a feasibility study of UAV-based SHM for linear infrastructures is essential to ensuring their service continuity through an advanced SHM system. Thus, this study proposes a single UAV for the seismic monitoring and safety assessment of linear infrastructures along with their computer vision-aided procedures. The proposed procedures were implemented in a full-scale shake-table test of a natural gas pipeline assembly. The objectives were to explore the UAV potential for the seismic vibration monitoring of linear infrastructures with the aid of several computer vision algorithms and to investigate the impact of parameter selection for each algorithm on the matching accuracy. The procedure starts by adopting the Maximally Stable Extremal Region (MSER) method to extract covariant regions that remain similar through a certain threshold of image series. The feature of interest is then detected, extracted, and matched using the Speeded-Up Robust Features (SURF) and K-nearest Neighbor (KNN) algorithms. The Maximum Sample Consensus (MSAC) algorithm is applied for model fitting by maximizing the likelihood of the solution. The output of each algorithm is examined for correctness in matching pairs and accuracy, which is a highlight of this procedure, as no studies have ever investigated these properties. The raw data are corrected and scaled to generate displacement data. Finally, a structural safety assessment was performed using several system identification models. These procedures were first validated using an aluminum bar placed on an actuator and tested in three harmonic tests, and then an implementation case study on the pipeline shake-table tests was analyzed. The validation tests show good agreement between the UAV data and reference data. The shake-table test results also generate reasonable seismic performance and assess the pipeline seismic safety, demonstrating the feasibility of the proposed procedure and the prospect of UAV-based SHM for linear infrastructure monitoring. Full article
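A condensed sketch of the matching chain described above (MSER regions, descriptor matching with a ratio test, robust model fitting) is given below. ORB stands in for SURF, which requires the opencv-contrib build, and OpenCV's RANSAC stands in for MSAC; the displacement scaling and system-identification steps are not shown, and the thresholds are guesses rather than the paper's values.

import cv2
import numpy as np

def track_target(ref_gray, cur_gray, ratio=0.75, ransac_thresh=3.0):
    mser, orb = cv2.MSER_create(), cv2.ORB_create()
    kp1, des1 = orb.compute(ref_gray, mser.detect(ref_gray, None))
    kp2, des2 = orb.compute(cur_gray, mser.detect(cur_gray, None))

    knn = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [p[0] for p in knn                       # Lowe-style ratio test
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # Robust similarity transform between frames; the inliers play the role of the MSAC-refined pairs.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=ransac_thresh)
    if M is None:
        raise RuntimeError("not enough consistent matches to fit a motion model")
    return M[0, 2], M[1, 2], int(inliers.sum())     # pixel translation and inlier count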
Figures:
Figure 1. Proposed computer-vision procedures for UAV-based seismic SHM of linear infrastructures.
Figure 2. Selected features (P1, P2, P3, and BG) from Tests 1–3 with the setup on the simulator, and an example of targeted feature matching with no errors in Test 1.
Figure 3. Gray-level distribution and intensity.
Figure 4. Detected regions and differences from the reference image (Δregions, %) with respect to MSER threshold delta variations (MSER TH).
Figure 5. Detected MSERs and correct pairs from SURF, KNN, and refined MSAC matching for different threshold delta values, using the reference and second images from Test 1.
Figure 6. Percentage of correct matches (accuracy, %) based on SURF 64-D and 128-D and KNN threshold variations (KNN TH).
Figure 7. Correct pairs and matching accuracies with the SURF and MSAC algorithms based on SURF 64-D and 128-D with MSER threshold delta variations.
Figure 8. Number of correct pair matches and their accuracies (%) based on the MSAC threshold (MSAC TH) with MSER threshold delta variations.
Figure 9. Point-matching pairs with their respective MSAC thresholds; the example from Test 1 shows the selected points P1, P2, P3, and BG in the specimen area and the unidentified points.
Figure 10. Point-matching pairs from Tests 2 and 3 and the points selected to measure displacement.
Figure 11. Displacement response results, δx, from validation Tests 1, 2, and 3.
Figure 12. AR spectrum and natural frequency of the specimen measured in validation Tests 1–3.
Figure 13. Seismic testing setup showing the UAV position during tests, the pipeline position on the biaxial shake table, and the points selected to generate the seismic response.
Figure 14. Computer-vision results from the pipeline test: (a) features of interest, (b) detected MSERs, (c) SURF and KNN matching, (d) refined matching results using MSAC.
Figure 15. Pipeline seismic responses in the lateral and biaxial directions.
Figure 16. Frequency response and stabilization plots of the pipeline system in the lateral and longitudinal directions.

25 pages, 9041 KiB  
Article
MuA-SAR Fast Imaging Based on UCFFBP Algorithm with Multi-Level Regional Attention Strategy
by Fanyun Xu, Rufei Wang, Yulin Huang, Deqing Mao, Jianyu Yang, Yongchao Zhang and Yin Zhang
Remote Sens. 2023, 15(21), 5183; https://doi.org/10.3390/rs15215183 - 30 Oct 2023
Cited by 1 | Viewed by 1075
Abstract
Multistatic airborne SAR (MuA-SAR) benefits from the ability to flexibly adjust the positions of multiple transmitters and receivers in space, which can shorten the synthetic aperture time to achieve the required resolution. To ensure both imaging efficiency and quality of different system spatial configurations and trajectories, the fast factorized back projection (FFBP) algorithm is proposed. However, if the FFBP algorithm based on polar coordinates is directly applied to the MuA-SAR system, the interpolation in the recursive fusion process will bring the problem of redundant calculations and error accumulation, leading to a sharp decrease in imaging efficiency and quality. In this paper, a unified Cartesian fast factorized back projection (UCFFBP) algorithm with a multi-level regional attention strategy is proposed for MuA-SAR fast imaging. First, a global Cartesian coordinate system (GCCS) is established. Through designing the rotation mapping matrix and phase compensation factor, data from different bistatic radar pairs can be processed coherently and efficiently. In addition, a multi-level regional attention strategy based on maximally stable extremal regions (MSER) is proposed. In the recursive fusion process, only the suspected target regions are paid more attention and segmented for coherent fusion at each fusion level, which further improves efficiency. The proposed UCFFBP algorithm ensures both the quality and efficiency of MuA-SAR imaging. Simulation experiments verified the effectiveness of the proposed algorithm. Full article
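The regional-attention idea, attending only to suspected target regions at each fusion level, can be illustrated independently of the SAR processing itself. The sketch below runs MSER on the magnitude of an intermediate coarse image and returns padded slices so that the next fusion level would be evaluated only inside them; the UCFFBP recursion, rotation mapping, and phase compensation are not shown, and all names and values are assumptions.

import cv2
import numpy as np

def suspected_target_slices(coarse_image_complex, pad=8):
    # MSER on the normalised magnitude image proposes suspected target regions.
    mag = np.abs(coarse_image_complex)
    mag_u8 = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, bboxes = cv2.MSER_create().detectRegions(mag_u8)
    h, w = mag_u8.shape
    slices = []
    for x, y, bw, bh in bboxes:
        y0, y1 = max(0, y - pad), min(h, y + bh + pad)
        x0, x1 = max(0, x - pad), min(w, x + bw + pad)
        slices.append((slice(y0, y1), slice(x0, x1)))
    return slices

# Usage idea: at each fusion level, run the coherent fusion only inside
# coarse_image[sl] for each sl returned by suspected_target_slices(coarse_image).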
Figures:
Figure 1. Spatial geometric configuration of the MuA-SAR system.
Figure 2. Distribution principle of the WS.
Figure 3. WS in different states: (a) the k_a directions of the WS of different receivers are inconsistent; (b) the k_a directions are consistent.
Figure 4. Shifting and folding of the sub-aperture WS and analysis of the aliasing phenomenon, including WS distributions for grid resolutions equal to and much higher than the theoretical value and the corresponding BP imaging results for randomly distributed point targets.
Figure 5. Flowchart of the proposed imaging algorithm.
Figure 6. Schematic diagram of the rotated coordinate system: (a) the rotated Cartesian coordinate system uOv; (b) the transmitter's local coordinate system u′Ov′ and the receiver's local coordinate system u″Ov″.
Figure 7. Schematic diagram of the MSER-based image segmentation method.
Figure 8. Target distribution map of the point-target scene.
Figure 9. Comparison of point-target imaging performance for the BP, FFBP, and proposed UCFFBP algorithms, with azimuth and range profiles of the imaging results for targets P1 and P2.
Figure 10. Comparison of 2D surface-target imaging performance: (a) BP, (b) FFBP, (c) the proposed UCFFBP algorithm.
Figure 11. Comparison of the processing time for 2D surface targets with different valid-pixel ratios.

21 pages, 6210 KiB  
Article
The Design of a Video Reflection Removal Method Based on Illumination Compensation and Image Completion Fusion
by Shaohong Ding, Yi Xu, Xiangcun Kong, Shuyue Shi and Juan Ni
Appl. Sci. 2023, 13(19), 10913; https://doi.org/10.3390/app131910913 - 1 Oct 2023
Viewed by 1456
Abstract
Our objective is to develop a video reflection removal algorithm that is both easy to compute and effective. Unlike previous methods that depend on machine learning, our approach proposes a local image reflection removal technique that combines image completion and lighting compensation. To achieve this, we utilized the MSER image region feature point matching method to reduce image processing time and the spatial area of layer separation regions. In order to improve the adaptability of our method, we implemented a local image reflection removal technique that utilizes image completion and lighting compensation to interpolate layers and update motion field data in real-time. Our approach is both simple and efficient, allowing us to quickly obtain reflection-free video sequences under a variety of lighting conditions. This enabled us to achieve real-time detection effects through video restoration. This experiment has confirmed the efficacy of our method and demonstrated its comparable performance to advanced methods. Full article
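The step of narrowing the processing area with MSER feature matching can be sketched as follows: MSER regions in consecutive frames are paired by centroid proximity and area similarity so that layer separation and completion can be restricted to a small window. The image-completion and illumination-compensation stages themselves are not reproduced, and the tolerances are assumptions rather than the paper's settings.

import cv2
import numpy as np

def mser_centroids(gray):
    regions, bboxes = cv2.MSER_create().detectRegions(gray)
    cents = np.array([pts.mean(axis=0) for pts in regions]) if regions else np.empty((0, 2))
    areas = np.array([len(pts) for pts in regions])
    return cents, areas, bboxes

def match_regions(gray_prev, gray_cur, max_shift=20.0, area_tol=0.3):
    # Pair regions between frames by nearest centroid and similar pixel count.
    c1, a1, b1 = mser_centroids(gray_prev)
    c2, a2, b2 = mser_centroids(gray_cur)
    pairs = []
    for i, (c, a) in enumerate(zip(c1, a1)):
        if len(c2) == 0:
            break
        d = np.linalg.norm(c2 - c, axis=1)
        j = int(np.argmin(d))
        if d[j] < max_shift and abs(a2[j] - a) / max(a, 1) < area_tol:
            pairs.append((i, j))
    return pairs, b1, b2                 # matched region indices plus both frames' boxes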
Figures:
Figure 1. Physical (left) and mathematical (middle) image-formation models of the reflection removal method and the actual experimental scenario (right). Types I and II ignore the refraction effect of thicker glass, while type III accounts for thicker glass, which reflects and refracts light from the object in front of it to different locations with different intensities, producing superimposed images in the reflective layer.
Figure 2. Overview of the method: video framing and selection of images with reflective areas (blue module), calculation of the number of images used in the image sequence (green module), lighting compensation and image completion (orange module), reflection removal via the difference between the image and the reflection layer (pink module), and image collection and video restoration (purple module).
Figure 3. Flowchart of the reflective-region removal method based on combined image completion and illumination compensation.
Figure 4. Diagram of the unchanged reflection area.
Figure 5. Results of the image-completion-based reflective-region removal method.
Figure 6. Experimental equipment.
Figure 7. Partial dataset.
Figure 8. Comparison of five reflection removal methods in a normal driving environment.
Figure 9. Comparison of five reflection removal methods under normal lighting conditions.
Figure 10. Results of the five reflection removal methods under light conditions: (a) the relative-motion-based method, (b) black-box video reflection removal, (c) the image-completion-based method, (d) the Static Video method, and (e) the proposed combination of image completion and light compensation.
Figure 11. Results of the five reflection removal methods in a normal driving environment, with the same labeling as in Figure 10.

22 pages, 1528 KiB  
Article
Automating Assessment and Providing Personalized Feedback in E-Learning: The Power of Template Matching
by Zainab R. Alhalalmeh, Yasser M. Fouda, Muhammad A. Rushdi and Moawwad El-Mikkawy
Sustainability 2023, 15(19), 14234; https://doi.org/10.3390/su151914234 - 26 Sep 2023
Viewed by 1515
Abstract
This research addressed the need to enhance template-matching performance in e-learning and automated assessments within Egypt’s evolving educational landscape, marked by the importance of e-learning during the COVID-19 pandemic. Despite the widespread adoption of e-learning, robust template-matching feedback mechanisms should still be developed for personalization, engagement, and learning outcomes. This study augmented the conventional best-buddies similarity (BBS) approach with four feature descriptors, Harris, scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and maximally stable extremal regions (MSER), to enhance template-matching performance in e-learning. We systematically selected algorithms, integrated them into enhanced BBS schemes, and assessed their effectiveness against a baseline BBS approach using challenging data samples. A systematic algorithm selection process involving multiple reviewers was employed. Chosen algorithms were integrated into enhanced BBS schemes and rigorously evaluated. The results showed that the proposed schemes exhibited enhanced template-matching performance, suggesting potential improvements in personalization, engagement, and learning outcomes. Further, the study highlights the importance of robust template-matching feedback in e-learning, offering insights into improving educational quality. The findings enrich e-learning experiences, suggesting avenues for refining e-learning platforms and positively impacting the Egyptian education sector. Full article
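How the four descriptors are folded into the BBS similarity is specific to the paper, so the sketch below only illustrates the generic combination of a keypoint stage with template matching: MSER keypoints described with SIFT vote for a coarse template location in the target image, which a dense BBS-style matcher could then refine. This is an assumed illustration, not the proposed scheme.

import cv2
import numpy as np

def coarse_locate(template_gray, target_gray, ratio=0.75):
    mser, sift = cv2.MSER_create(), cv2.SIFT_create()
    kp_t, des_t = sift.compute(template_gray, mser.detect(template_gray, None))
    kp_s, des_s = sift.compute(target_gray, mser.detect(target_gray, None))

    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_t, des_s, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    if not good:
        return None
    # Each match votes for where the template's top-left corner sits in the target.
    votes = [np.array(kp_s[m.trainIdx].pt) - np.array(kp_t[m.queryIdx].pt) for m in good]
    x, y = np.median(np.array(votes), axis=0)
    th, tw = template_gray.shape
    return int(round(x)), int(round(y)), tw, th    # coarse box for dense refinement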
Figures:
Figure 1. Examples of template-matching results for the same pair of images, showing the overlap, the matched region, and the matching score map: (a) the baseline BBS algorithm (upper matching template) and (b) the proposed BBS algorithm (lower matching template).
Figure 2. Number of pairs with enhanced overlap percentages for each of the five proposed algorithms compared with the baseline BBS algorithm.
Figure 3. Box plots of the running times (in seconds) for the five proposed algorithms and the baseline BBS algorithm.
Figure 4. Relative enhancements of the overlap percentages for each of the five proposed algorithms compared with the baseline BBS algorithm.

23 pages, 37642 KiB  
Article
Automated Georectification, Mosaicking and 3D Point Cloud Generation Using UAV-Based Hyperspectral Imagery Observed by Line Scanner Imaging Sensors
by Anthony Finn, Stefan Peters, Pankaj Kumar and Jim O’Hehir
Remote Sens. 2023, 15(18), 4624; https://doi.org/10.3390/rs15184624 - 20 Sep 2023
Cited by 2 | Viewed by 1463
Abstract
Hyperspectral sensors mounted on unmanned aerial vehicles (UAV) offer the prospect of high-resolution multi-temporal spectral analysis for a range of remote-sensing applications. However, although accurate onboard navigation sensors track the moment-to-moment pose of the UAV in flight, geometric distortions are introduced into the scanned data sets. Consequently, considerable time-consuming (user/manual) post-processing rectification effort is generally required to retrieve geometrically accurate mosaics of the hyperspectral data cubes. Moreover, due to the line-scan nature of many hyperspectral sensors and their intrinsic inability to exploit structure from motion (SfM), only 2D mosaics are generally created. To address this, we propose a fast, automated and computationally robust georectification and mosaicking technique that generates 3D hyperspectral point clouds. The technique first morphologically and geometrically examines (and, if possible, repairs) poorly constructed individual hyperspectral cubes before aligning these cubes into swaths. The luminance of each individual cube is estimated and normalised, prior to being integrated into a swath of images. The hyperspectral swaths are co-registered to a targeted element of a luminance-normalised orthomosaic obtained using a standard red–green–blue (RGB) camera and SfM. To avoid computationally intensive image processing operations such as 2D convolutions, key elements of the orthomosaic are identified using pixel masks, pixel index manipulation and nearest neighbour searches. Maximally stable extremal regions (MSER) and speeded-up robust feature (SURF) extraction are then combined with maximum likelihood sample consensus (MLESAC) feature matching to generate the best geometric transformation model for each swath. This geometrically transforms and merges individual pushbroom scanlines into a single spatially continuous hyperspectral mosaic; and this georectified 2D hyperspectral mosaic is then converted into a 3D hyperspectral point cloud by aligning the hyperspectral mosaic with the RGB point cloud used to create the orthomosaic obtained using SfM. A high spatial accuracy is demonstrated. Hyperspectral mosaics with a 5 cm spatial resolution were mosaicked with root mean square positional accuracies of 0.42 m. The technique was tested on five scenes comprising two types of landscape. The entire process, which is coded in MATLAB, takes around twenty minutes to process data sets covering around 30 Ha at a 5 cm resolution on a laptop with 32 GB RAM and an Intel® Core i7-8850H CPU running at 2.60 GHz. Full article
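A condensed sketch of the swath-to-orthomosaic registration step is shown below (the paper's pipeline is coded in MATLAB; Python/OpenCV is used here for consistency with the other sketches). MSER keypoints with ORB descriptors stand in for the MSER/SURF features, and OpenCV's RANSAC stands in for MLESAC; the luminance normalisation, pixel masking, and point-cloud generation stages are not reproduced.

import cv2
import numpy as np

def register_swath(swath_gray, ortho_gray, ratio=0.7, reproj_thresh=4.0):
    mser, orb = cv2.MSER_create(), cv2.ORB_create(nfeatures=4000)
    kp1, des1 = orb.compute(swath_gray, mser.detect(swath_gray, None))
    kp2, des2 = orb.compute(ortho_gray, mser.detect(ortho_gray, None))

    knn = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des1, des2, k=2)
    good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    if H is None:
        raise RuntimeError("registration failed: too few consistent features")

    # Resample the swath onto the orthomosaic grid with the fitted transformation.
    h, w = ortho_gray.shape
    return cv2.warpPerspective(swath_gray, H, (w, h)), H, inliers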
Figures:
Figure 1. Orthomosaic of the Goat Farm B site.
Figure 2. Orthomosaic of the PEGS site Hale.
Figure 3. Workflow of the methodology employed in this study; the colour code indicates the software packages used to create each output (red lines indicate output derived from MetaShape), and all software is run in batch mode to minimise user effort.
Figure 4. Graphical depiction of the hyperspectral image/cube i. Images are combined and processed together into n groups H_j^i, j = 1 ... m_i, i = 1 ... n, where m_i is the number of images in the i-th swath and a swath is a linear combination of contiguous images.
Figure 5. Graphical depiction of swaths comprising hypercubes/imagery H_1^1, ..., H_14^2.
Figure 6. Uncorrected hyperspectral mosaic formed from raw GNSS data; the red boxes mark the regions shown zoomed in Figures 7 and 8.
Figure 7. Zoomed regions of Figure 6, showing (left) mismatched features between swaths and (right) features observed twice in contiguous images.
Figure 8. Zoomed regions of Figure 6, showing gaps between contiguous along- and across-track images.
Figure 9. Zoomed regions of the Goat Farm B site showing discontinuities in lines of two-year-old trees.
Figure 10. Georectified hyperspectral mosaic formed using the workflow described in this paper (a few single images at the ends of some swaths shown in Figure 9 were auto-rejected, and the very narrow side overlap (<5%) leaves a few gaps between swaths).
Figure 11. Zoomed regions of the georectified hyperspectral mosaic (from Figure 10), showing the improved alignment of features between swaths and of features observed twice.
Figure 12. Zoomed regions of the georectified hyperspectral mosaic (from Figure 10), showing the improved edge alignment of contiguous images in both the along- and across-track directions.
Figure 13. Zoomed regions of the Goat Farm B data set showing georectified lines of two-year-old trees (compare with Figure 9).
Figure 14. 3D point cloud of NDVI for the PEGS Hale site; the elevated NDVI values align accurately with vegetation structures such as the tall trees at the back of the site.
Figure 15. Zoomed regions of Figure 14, showing the 3D point cloud of NDVI for the PEGS Hale site.
Figure 16. Zoomed regions of the Goat Farm B site showing the NDVI point cloud derived from the auto-rectified workflow; darker regions indicate higher NDVI values, which align well with the 3D structures of young trees, while lighter regions indicate lower NDVI values corresponding to the terrain surface.

19 pages, 5055 KiB  
Article
VIDAR-Based Road-Surface-Pothole-Detection Method
by Yi Xu, Teng Sun, Shaohong Ding, Jinxin Yu, Xiangcun Kong, Juan Ni and Shuyue Shi
Sensors 2023, 23(17), 7468; https://doi.org/10.3390/s23177468 - 28 Aug 2023
Cited by 2 | Viewed by 1992
Abstract
This paper presents a VIDAR (a Vision-IMU based detection and ranging method)-based approach to road-surface pothole detection. Most potholes on the road surface are caused by the further erosion of cracks in the road surface, and tires, wheels and bearings of vehicles are damaged to some extent as they pass through the potholes. To ensure the safety and stability of vehicle driving, we propose a VIDAR-based pothole-detection method. The method combines vision with IMU to filter, mark and frame potholes on flat pavements using MSER to calculate the width, length and depth of potholes. By comparing it with the classical method and using the confusion matrix to judge the correctness, recall and accuracy of the method proposed in this paper, it is verified that the method proposed in this paper can improve the accuracy of monocular vision in detecting potholes in road surfaces. Full article
(This article belongs to the Section Vehicular Sensing)
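The MSER marking step can be illustrated with a deliberately simplified sketch: candidate pothole regions are framed with MSER bounding boxes and their size is converted to metres using a single assumed ground-sampling distance. The paper instead derives width, length, and depth from the Vision-IMU camera geometry over two frames, so the scale factor and size gate below are placeholders, not the paper's model.

import cv2

def pothole_candidates(road_gray, metres_per_pixel=0.005):
    _, bboxes = cv2.MSER_create().detectRegions(road_gray)
    candidates = []
    for x, y, w, h in bboxes:
        width_m, length_m = w * metres_per_pixel, h * metres_per_pixel
        if 0.05 < width_m < 2.0:          # crude size gate for pothole-like regions
            candidates.append({"box": (int(x), int(y), int(w), int(h)),
                               "width_m": round(width_m, 3),
                               "length_m": round(length_m, 3)})
    return candidates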
Show Figures

Figure 1

Figure 1
<p>Three-dimensional obstacle small-hole imaging principle—diagram.</p>
Full article ">Figure 2
<p>Static obstacle imaging schematic.</p>
Full article ">Figure 3
<p>Dynamic obstacle imaging schematic.</p>
Full article ">Figure 4
<p>Road-surface pothole treatment diagram.</p>
Full article ">Figure 5
<p>Road-surface-pothole width-calculation-method chart.</p>
Full article ">Figure 6
<p>Roadway pothole length-calculation-method chart.</p>
Full article ">Figure 7
<p>Road-surface pothole depth-calculation-method chart.</p>
Full article ">Figure 8
<p>Schematic diagram of road-surface pothole depth update.</p>
Full article ">Figure 9
<p>Flow chart of VIDAR-based road-surface pothole detection.</p>
Full article ">Figure 10
<p>Schematic diagram of road-surface potholes under the simulation experiment.</p>
Full article ">Figure 11
<p>Feature point extraction and rectangular-box marking.</p>
Full article ">Figure 12
<p>Feature point tracking and image matching.</p>
Full article ">Figure 13
<p>Experimental vehicle equipment diagram.</p>
Full article ">Figure 14
<p>Camera calibration and alignment process.</p>
Full article ">Figure 15
<p>Selected results of pothole detection in a real-world environment.</p>
Full article ">
16 pages, 3130 KiB  
Article
Model Catanionic Vesicles from Biomimetic Serine-Based Surfactants: Effect of the Combination of Chain Lengths on Vesicle Properties and Vesicle-to-Micelle Transition
by Isabel S. Oliveira, Sandra G. Silva, Maria Luísa do Vale and Eduardo F. Marques
Membranes 2023, 13(2), 178; https://doi.org/10.3390/membranes13020178 - 1 Feb 2023
Cited by 1 | Viewed by 2421
Abstract
Mixtures of cationic and anionic surfactants often originate bilayer structures, such as vesicles and lamellar liquid crystals, that can be explored as model membranes for fundamental studies or as drug and gene nanocarriers. Here, we investigated the aggregation properties of two catanionic mixtures [...] Read more.
Mixtures of cationic and anionic surfactants often originate bilayer structures, such as vesicles and lamellar liquid crystals, that can be explored as model membranes for fundamental studies or as drug and gene nanocarriers. Here, we investigated the aggregation properties of two catanionic mixtures containing biomimetic surfactants derived from serine. The mixtures are designated as 12Ser/8-8Ser and 14Ser/10-10Ser, where mSer is a cationic, single-chained surfactant and n-nSer is an anionic, double-chained one (m and n being the C atoms in the alkyl chains). Our goal was to investigate the effects of total chain length and chain length asymmetry of the catanionic pair on the formation of catanionic vesicles, the vesicle properties and the vesicle/micelle transitions. Ocular observations, surface tension measurements, video-enhanced light microscopy, cryogenic scanning electron microscopy, dynamic and electrophoretic light scattering were used to monitor the self-assembly process and the aggregate properties. Catanionic vesicles were indeed found in both systems for molar fractions of cationic surfactant ≥0.40, always possessing positive zeta potentials (ζ = +35–50 mV), even for equimolar sample compositions. Furthermore, the 14Ser/10-10Ser vesicles were only found as single aggregates (i.e., without coexisting micelles) in a very narrow compositional range and as a bimodal population (average diameters of 80 and 300 nm). In contrast, the 12Ser/8-8Ser vesicles were found for a wider sample compositional range and as unimodal or bimodal populations, depending on the mixing ratio. The aggregate size, pH and zeta potential of the mixtures were further investigated. The unimodal 12Ser/8-8Ser vesicles (<DH> ≈ 250 nm, pH ≈ 7–8, ζ ≈ +32 mV and a cationic/anionic molar ratio of ≈2:1) are particularly promising for application as drug/gene nanocarriers. Both chain length asymmetry and total length play a key role in the aggregation features of the two systems. Molecular insights are provided by the main findings. Full article
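One small arithmetic link between the two composition scales quoted in the abstract above: a cationic molar fraction x corresponds to a cationic/anionic molar ratio of x/(1 − x), so the reported ≈2:1 ratio for the unimodal 12Ser/8-8Ser vesicles implies x<sub>12Ser</sub> ≈ 0.67 (an inferred value, not one stated explicitly in the abstract):

```latex
\frac{n_{\text{cationic}}}{n_{\text{anionic}}} = \frac{x}{1-x},
\qquad x \approx 0.67 \;\Rightarrow\; \frac{n_{\text{cationic}}}{n_{\text{anionic}}} \approx 2
```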
(This article belongs to the Special Issue Study on Drug-Membrane Interactions, Volume II)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Molecular structure of the serine-based surfactants that form the catanionic mixtures: (<b>a</b>) 12Ser/8-8Ser system; (<b>b</b>) 14Ser/10-10Ser system. The letters n and m represent the number of C atoms in the hydrocarbon chains; m − n is thus the chain length asymmetry and m + 2n is the total chain length.</p>
Full article ">Figure 2
<p>Phase maps and aggregate imaging of the 12Ser/8-8Ser and 14Ser/10-10Ser catanionic mixtures, at 0.5 wt% total surfactant and 25.0 °C; <span class="html-italic">x</span><sub>12Ser</sub> and <span class="html-italic">x</span><sub>14Ser</sub> represent the molar fraction of the respective cationic surfactant in the mixture. (<b>A</b>–<b>C</b>) are VELM micrographs showing giant µm-sized vesicles; (<b>A′</b>–<b>C′</b>) are cryo-SEM micrographs.</p>
Full article ">Figure 3
<p>Plots of the average hydrodynamic diameter of the aggregates as measured by DLS vs. molar fraction of cationic surfactant for the two mixtures: (<b>a</b>) 12Ser/8-8Ser and (<b>b</b>) 14Ser/10-10Ser. For some compositions, the DLS (intensity-weighted) size distributions are shown: <span class="html-italic">x</span><sub>Ser12</sub> = 0, 0.50, 0.60 and 0.90 for 12Ser/8-8Ser; <span class="html-italic">x</span><sub>Ser14</sub> = 0 and 0.50 for 14Ser/10-10Ser.</p>
Full article ">Figure 4
<p>Zeta potential (<b>a</b>) and pH (<b>b</b>) of 12Ser/8-8Ser and 14Ser/10-10Ser catanionic mixtures, at 0.5 wt% total surfactant and 25 °C.</p>
Full article ">Figure 5
<p>Surface tension plots at 25 °C for the (<b>a</b>) 12Ser/8-8Ser and (<b>b</b>) 14Ser/10-10Ser catanionic mixtures and their respective neat surfactants.</p>
Full article ">Scheme 1
<p>Synthetic routes for the serine-based surfactants.</p>
Full article ">
20 pages, 2110 KiB  
Article
On the Kavya–Manoharan–Burr X Model: Estimations under Ranked Set Sampling and Applications
by Osama H. Mahmoud Hassan, Ibrahim Elbatal, Abdullah H. Al-Nefaie and Mohammed Elgarhy
J. Risk Financial Manag. 2023, 16(1), 19; https://doi.org/10.3390/jrfm16010019 - 28 Dec 2022
Cited by 9 | Viewed by 1806
Abstract
A new two-parameter model is proposed using the Kavya–Manoharan (KM) transformation family and Burr X (BX) distribution. The new model is called the Kavya–Manoharan–Burr X (KMBX) model. The statistical properties are obtained, involving the quantile (QU) function, [...] Read more.
A new two-parameter model is proposed using the Kavya–Manoharan (KM) transformation family and Burr X (BX) distribution. The new model is called the Kavya–Manoharan–Burr X (KMBX) model. The statistical properties are obtained, involving the quantile (QU) function, moment (MOs), incomplete MOs, conditional MOs, MO-generating function, and entropy. Based on simple random sampling (SiRS) and ranked set sampling (RaSS), the model parameters are estimated via the maximum likelihood (MLL) method. A simulation experiment is used to compare these estimators based on the bias (BI), mean square error (MSER), and efficiency. The estimates conducted using RaSS tend to be more efficient than the estimates based on SiRS. The importance and applicability of the KMBX model are demonstrated using three different data sets. Some of the useful actuarial risk measures, such as the value at risk and conditional value at risk, are discussed. Full article
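The simulation comparison described above, bias, MSE and relative efficiency under simple random sampling versus ranked set sampling, can be sketched as follows. To keep the example self-contained it compares the reciprocal-of-the-sample-mean estimator of an exponential rate under the two designs rather than the full KMBX maximum-likelihood fit (the KMBX density is not reproduced here), and it assumes perfect ranking; the design constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 2.0                       # true exponential rate parameter
set_size, cycles, reps = 5, 20, 2000  # RSS design: n = set_size * cycles per replicate
n = set_size * cycles

def rss_sample():
    """One ranked-set sample with perfect ranking: in each cycle, draw set_size
    sets of set_size units and keep the i-th order statistic from the i-th set."""
    out = []
    for _ in range(cycles):
        for i in range(set_size):
            s = rng.exponential(1 / true_rate, set_size)
            out.append(np.sort(s)[i])
    return np.array(out)

# Plug-in estimator 1/mean under both designs (stands in for the MLE comparison).
est_srs = np.array([1 / rng.exponential(1 / true_rate, n).mean() for _ in range(reps)])
est_rss = np.array([1 / rss_sample().mean() for _ in range(reps)])

for name, est in (("SiRS", est_srs), ("RaSS", est_rss)):
    bias = est.mean() - true_rate
    mse = np.mean((est - true_rate) ** 2)
    print(f"{name}: bias={bias:+.4f}  MSE={mse:.4f}")
print("relative efficiency (MSE_SiRS / MSE_RaSS):",
      np.mean((est_srs - true_rate) ** 2) / np.mean((est_rss - true_rate) ** 2))
```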
(This article belongs to the Special Issue Stochastic Modeling and Statistical Analysis of Financial Data)
Show Figures

Figure 1

Figure 1
<p>The pdf plots of the KMB<sub>X</sub> model.</p>
Full article ">Figure 2
<p>The hrf plots of the KMB<sub>X</sub> model.</p>
Full article ">Figure 3
<p>The profile log-likelihood plot for the first data set.</p>
Full article ">Figure 4
<p>The profile log-likelihood plot for the second data set.</p>
Full article ">Figure 5
<p>The profile log-likelihood plot for the third data set.</p>
Full article ">Figure 6
<p>The fitted cdf, pdf, and pp plots and the estimated plot for the first data set.</p>
Full article ">Figure 7
<p>The fitted cdf, pdf, and pp plots and the estimated plot for the second data set.</p>
Full article ">Figure 8
<p>The fitted cdf, pdf, and pp plots and the estimated plot for the third data set.</p>
Full article ">
17 pages, 280 KiB  
Article
Social and Environmental Regulations and Corporate Innovation
by Zhi Cao and Yinping Mu
Sustainability 2022, 14(23), 16275; https://doi.org/10.3390/su142316275 - 6 Dec 2022
Cited by 4 | Viewed by 1762
Abstract
In this study, we investigate the effects of mandatory social and environmental regulations (MSER) on firm innovation. In 2008, the Shanghai and Shenzhen Stock Exchange in China published regulations that mandate some public firms to disclose their social and environmental governance information in [...] Read more.
In this study, we investigate the effects of mandatory social and environmental regulations (MSER) on firm innovation. In 2008, the Shanghai and Shenzhen Stock Exchange in China published regulations that mandate some public firms to disclose their social and environmental governance information in their annual reports. As the MSER apply only to selected firms, this provides an ideal setting for us to observe the effects of MSER on firm innovation. Using a difference-in-differences with propensity-score-matching methodology, we find that the treatment firms experience a significant increase in innovation in terms of the number of total patents and invention patents. More importantly, we further explore three possible mechanisms underlying this association, that is, the corporate social responsibility (CSR)-improving effect, information-disclosing effect, and market-reaction effect, and demonstrate that this positive relationship is mainly driven by the CSR-improving effect and market-reaction effect, manifesting in an improvement in CSR performance and a decline in transient institutional investors for the treatment firms, respectively. Full article
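A minimal sketch of the difference-in-differences regression that a study like this typically estimates on the propensity-score-matched panel (the file name, column names and clustering variable are placeholders, not the authors' actual data or code):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder panel: one row per firm-year with columns
#   patents  - e.g. log(1 + number of patent applications)
#   treated  - 1 if the firm falls under the 2008 mandatory disclosure rule
#   post     - 1 for fiscal years after 2008
#   firm_id  - firm identifier used for clustered standard errors
df = pd.read_csv("matched_firm_year_panel.csv")

# The coefficient on treated:post is the difference-in-differences estimate
# of the MSER effect on innovation output.
model = smf.ols("patents ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
print(model.summary().tables[1])
```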
19 pages, 4627 KiB  
Article
An Improved Differentiable Binarization Network for Natural Scene Street Sign Text Detection
by Manhuai Lu, Yi Leng, Chin-Ling Chen and Qiting Tang
Appl. Sci. 2022, 12(23), 12120; https://doi.org/10.3390/app122312120 - 27 Nov 2022
Cited by 2 | Viewed by 1924
Abstract
The street sign text information from natural scenes usually exists in a complex background environment and is affected by natural light and artificial light. However, most of the current text detection algorithms do not effectively reduce the influence of light and do not [...] Read more.
Street sign text in natural scenes usually appears against complex backgrounds and is affected by both natural and artificial light. However, most current text detection algorithms neither reduce the influence of lighting effectively nor make full use of the relationship between high-level and contextual semantic information in the feature extraction network, so they perform poorly on text in complex backgrounds. To address these problems, we first propose a multi-channel MSER (Maximally Stable Extremal Regions) method that fully exploits color information, separating text regions from the complex background and thereby reducing the influence of background clutter and lighting on street sign text detection. We also propose an enhanced feature pyramid network for text detection, which includes a feature pyramid route enhancement (FPRE) module and a high-level feature enhancement (HLFE) module. Together, these modules make full use of the network’s low-level and high-level semantic information, improving its ability to localize text and to detect text of different shapes and sizes as well as inclined text. Experiments showed that the F-scores obtained by the proposed method on the ICDAR 2015 (International Conference on Document Analysis and Recognition 2015) dataset, the ICDAR2017-MLT (International Conference on Document Analysis and Recognition 2017 Competition on Multi-lingual Scene Text Detection) dataset, and the Natural Scene Street Signs (NSSS) dataset constructed in this study are 89.5%, 84.5%, and 73.3%, respectively, confirming the advantage of the proposed method for street sign text detection. Full article
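A simplified sketch of the multi-channel MSER idea described above, running MSER on several color channels so that text with different chromatic contrast is not lost, is given below. This is an illustration under assumed choices; the channel set, the default MSER parameters and the image path are not taken from the article.

```python
import cv2

def multi_channel_mser(bgr):
    """Run MSER on grayscale, BGR and HSV channels and pool the bounding boxes."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    b, g, r = cv2.split(bgr)
    h, s, v = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV))
    mser = cv2.MSER_create()
    boxes = []
    for channel in (gray, b, g, r, h, s, v):
        _, bboxes = mser.detectRegions(channel)
        boxes.extend(bboxes.tolist())
    return boxes  # candidate text boxes, to be pruned by a non-text classifier

img = cv2.imread("street_sign.jpg")  # placeholder path
print(len(multi_channel_mser(img)), "candidate boxes before non-text pruning")
```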
(This article belongs to the Special Issue Computer Vision and Pattern Recognition Based on Deep Learning)
Show Figures

Figure 1

Figure 1
<p>Overall flow chart of the method proposed in this paper.</p>
Full article ">Figure 2
<p>Image extraction results for different color channels.</p>
Full article ">Figure 3
<p>The MSER extraction results of different color channels.</p>
Full article ">Figure 4
<p>DBNet model network structure diagram.</p>
Full article ">Figure 5
<p>Network structure diagram of improved DBNet model.</p>
Full article ">Figure 6
<p>Schematic diagram of the network structure of the high-level feature enhancement (HLFE) module.</p>
Full article ">Figure 7
<p>Flow chart of the experiment.</p>
Full article ">Figure 8
<p>Sample images from the NSSS dataset.</p>
Full article ">Figure 9
<p>Comparison of visualization results of the method proposed in this paper and the original DBNet model on the natural scene street sign (NSSS) dataset: (<b>a</b>) is the result of the proposed method in this paper, and (<b>b</b>) is the result of the original DBNet model.</p>
Full article ">Figure 10
<p>Comparison of visualization results between the method proposed in this paper and the original DBNet model on the ICDAR2015 dataset: (<b>a</b>) is the result of the proposed method in this paper, and (<b>b</b>) is the result of the original DBNet model.</p>
Full article ">Figure 11
<p>Comparison of visualization results between the method proposed in this paper and the original DBNet model on the ICDAR2017-MLT dataset: (<b>a</b>) is the result of the proposed method in this paper, and (<b>b</b>) is the result of the original DBNet model.</p>
Full article ">Figure 12
<p>F-score curves of Baseline and our method on ICDAR2015, ICDAR2017-MLT, and NSSS datasets.</p>
Full article ">
14 pages, 8791 KiB  
Article
An Effective Method for Detection and Recognition of Uyghur Texts in Images with Backgrounds
by Mayire Ibrayim, Ahmatjan Mattohti and Askar Hamdulla
Information 2022, 13(7), 332; https://doi.org/10.3390/info13070332 - 11 Jul 2022
Cited by 10 | Viewed by 2025
Abstract
Uyghur text detection and recognition in images with simple backgrounds is still a challenging task for Uyghur image content analysis. In this paper, we propose a new effective Uyghur text detection method based on channel-enhanced MSERs and the CNN classification model. In order [...] Read more.
Uyghur text detection and recognition in images with simple backgrounds is still a challenging task for Uyghur image content analysis. In this paper, we propose a new, effective Uyghur text detection method based on channel-enhanced MSERs and a CNN classification model. To extract more complete text components, we put forward a new text candidate region extraction algorithm based on channel-enhanced MSERs that accounts for the characteristics of Uyghur text. To prune non-text regions effectively, we design a CNN classification network modeled on LeNet-5, which learns descriptive features automatically and avoids tedious, inefficient hand-crafted feature extraction. For Uyghur text recognition in images, we improved the traditional CRNN network; to verify its effectiveness, the networks were trained on a synthetic dataset and evaluated on text recognition datasets. The experimental results indicated that the proposed Uyghur text detection method is robust and applicable, and that the recognition results of the improved CRNN were better than those of the original CRNN network. Full article
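As a rough sketch of the text/non-text pruning classifier described above, a LeNet-5-style CNN, one possible PyTorch layout is shown below. The 32×32 grayscale input and the layer sizes are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class TextNonTextCNN(nn.Module):
    """LeNet-5-style classifier for 32x32 grayscale candidate regions:
    text (class 1) vs. non-text (class 0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 28 -> 14
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 10 -> 5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, 2),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Smoke test on a dummy batch of candidate regions.
model = TextNonTextCNN()
logits = model(torch.randn(8, 1, 32, 32))
print(logits.shape)  # torch.Size([8, 2])
```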
Show Figures

Figure 1

Figure 1
<p>The characteristics of Uyghur text.</p>
Full article ">Figure 2
<p>Flowchart of the proposed method and corresponding results of each step.</p>
Full article ">Figure 3
<p>The network architecture of Uyghur text recognition based on the improved CRNN.</p>
Full article ">Figure 4
<p>The correspondence of Uyghur language and Latin.</p>
Full article ">Figure 5
<p>Examples of the detection dataset. (<b>a</b>) examples of color images; (<b>b</b>) positive samples in the training set; (<b>c</b>) negative samples in the training set.</p>
Full article ">Figure 6
<p>Examples of the recognition dataset. (<b>a</b>) examples of random synthetic text images; (<b>b</b>) examples of arbitrary-length text in natural scene images.</p>
Full article ">Figure 7
<p>The network architecture of CNN classification model.</p>
Full article ">Figure 8
<p>The performance of the CNN classification model on the validation set. Subfigure (<b>a</b>): the accuracy curve of the CNN classification model; subfigure (<b>b</b>): the average loss curve of the CNN classification model.</p>
Full article ">