Search Results (20,754)

Search Parameters:
Keywords = detection efficiency

14 pages, 4911 KiB  
Article
Overhead Power Line Tension Estimation Method Using Accelerometers
by Sang-Hyun Kim and Kwan-Ho Chun
Energies 2025, 18(1), 181; https://doi.org/10.3390/en18010181 - 3 Jan 2025
Abstract
Overhead power lines are important components of power grids, and the status of transmission line equipment directly affects the safe and reliable operation of power grids. To guarantee the reliable operation of lines and efficient usage of the power grid, the tension of overhead power lines is an important parameter to be measured. The tension of power lines can be calculated from the modal frequency, but the acceleration data obtained from the accelerometer is severely contaminated with noise. In this paper, a multiscale-based peak detection (M-AMPD) algorithm is used to find possible modal frequencies in the power spectral density of the acceleration data. To obtain a reliable noise-free signal, a median absolute deviation with baseline correction (MAD-BS) algorithm is applied. An accurate estimation of the modal frequencies used for tension estimation is obtained by iterating the MAD-BS algorithm together with a frequency-range reduction technique. The iterative range reduction improves the accuracy of the estimated tension of overhead power lines, which can contribute to improving the reliability and efficiency of the power grid. The proposed algorithm is implemented in MATLAB R2020a and verified by comparison with data measured by a tensiometer.
(This article belongs to the Section F: Electrical Engineering)
Figure 1. The proposed algorithm.
Figure 2. Schematic of the experiment.
Figure 3. The vibration data of an overhead power line for 10 min. (a) Measured acceleration data; (b) PSD by Welch's method.
Figure 4. The results of the AMPD algorithm. (a) The points detected as peaks in the PSD (red circles); (b) local maxima scalogram; (c) row-wise sum and λ (the red line indicates k = 53, where the calculated row-wise sum γ has a minimum); (d) rescaled local maxima scalogram (LMS) according to the detected global minimum shown in (c); (e) calculated row-wise standard deviation of the rescaled LMS.
Figure 5. The result of baseline correction. (a) Estimated baseline (black solid) vs. original PSD (blue solid); (b) baseline-corrected signal (PSD minus estimated baseline).
Figure 6. Iteration of MAD-BS. As the iterations proceed, the range of interest is reduced but the data become more reliable. (a) The 1st iteration on the unrestricted PSD; (b) the 2nd iteration on the restricted PSD; (c) the 3rd iteration; since there is no discontinuity, iteration is terminated.
16 pages, 11407 KiB  
Article
YOLOv8-LCNET: An Improved YOLOv8 Automatic Crater Detection Algorithm and Application in the Chang’e-6 Landing Area
by Jing Nan, Yexin Wang, Kaichang Di, Bin Xie, Chenxu Zhao, Biao Wang, Shujuan Sun, Xiangjin Deng, Hong Zhang and Ruiqing Sheng
Sensors 2025, 25(1), 243; https://doi.org/10.3390/s25010243 - 3 Jan 2025
Abstract
The Chang’e-6 (CE-6) landing area on the far side of the Moon is located in the southern part of the Apollo basin within the South Pole–Aitken (SPA) basin. The statistical analysis of impact craters in this region is crucial for ensuring a safe landing and supporting geological research. Aiming at existing impact crater identification problems such as complex background, low identification accuracy, and high computational costs, an efficient impact crater automatic detection model named YOLOv8-LCNET (YOLOv8-Lunar Crater Net) based on the YOLOv8 network is proposed. The model first incorporated a Partial Self-Attention (PSA) mechanism at the end of the Backbone, allowing the model to enhance global perception and reduce missed detections with a low computational cost. Then, a Gather-and-Distribute mechanism (GD) was integrated into the Neck, enabling the model to fully fuse multi-level feature information and capture global information, enhancing the model’s ability to detect impact craters of various sizes. The experimental results showed that the YOLOv8-LCNET model performs well in the impact crater detection task, achieving 87.7% Precision, 84.3% Recall, and 92% AP, which were 24.7%, 32.7%, and 37.3% higher than the original YOLOv8 model. The improved YOLOv8 model was then used for automatic crater detection in the CE-6 landing area (246 km × 135 km, with a DOM resolution of 3 m/pixel), resulting in a total of 770,671 craters, ranging from 13 m to 19,882 m in diameter. The analysis of this impact crater catalogue has provided critical support for landing site selection and characterization of the CE-6 mission and lays the foundation for future lunar geological studies.
Figure 1. CE-6 landing area DOM with a resolution of 3 m/pixel.
Figure 2. The structure of YOLOv8-LCNET. {B2, B3, B4, B5}, {P3, P4, P5}, and {N3, N4, N5} denote feature maps.
Figure 3. Some sample results of impact crater extraction from two local DOM mosaics.
Figure 4. Comparison of the predicted impact crater detection results using the YOLOv8 and YOLOv8-LCNET algorithms. (a) Ground truth, (b) YOLOv8, (c) YOLOv8-LCNET.
Figure 5. Comparison of the impact crater detection in a randomly selected area in SHP format.
Figure 6. Comparison of the crater detection algorithms in different areas. The yellow circles stand for TP, the green circles stand for FP (newly discovered unlabeled craters), and the red circles stand for FN. (a) Background, (b) ground truth, (c) YOLOv8-LCNET, (d) Wang et al. [48], (e) Xie et al. [49].
Figure 7. Impact craters with diameters greater than 120 m extracted from the CE-6 landing area, with subfigures showing the algorithm's performance across four evenly distributed regions (A–D) and a detailed view of craters in a 5.8 km × 7.3 km area near the landing point.
Figure 8. Size distribution of the impact crater diameters in the CE-6 landing area.
Figure 9. The size–frequency distributions of the impact craters in the mare region with an internal diameter bin factor of √2·D in a log–log plot. (a) The cumulative size–frequency distribution (CSFD) of craters [51]. (b) The incremental size–frequency distribution (ISFD) of craters [52]. (c) The ISFD established by robust kernel density estimation [53].
Figure 10. The size–frequency distributions of the impact craters in the highland region with an internal diameter bin factor of √2·D in a log–log plot. (a) The CSFD of craters [51]. (b) The ISFD of craters [52]. (c) The ISFD established by robust kernel density estimation [53].
39 pages, 7214 KiB  
Article
A Deep Learning-Based Approach for the Detection of Various Internet of Things Intrusion Attacks Through Optical Networks
by Nouman Imtiaz, Abdul Wahid, Syed Zain Ul Abideen, Mian Muhammad Kamal, Nabila Sehito, Salahuddin Khan, Bal S. Virdee, Lida Kouhalvandi and Mohammad Alibakhshikenari
Photonics 2025, 12(1), 35; https://doi.org/10.3390/photonics12010035 - 3 Jan 2025
Abstract
The widespread use of the Internet of Things (IoT) has led to significant breakthroughs in various fields but has also exposed critical vulnerabilities to evolving cybersecurity threats. Current Intrusion Detection Systems (IDSs) often fail to provide real-time detection, scalability, and interpretability, particularly in high-speed optical network environments. This research introduces XIoT, a novel explainable IoT attack detection model designed to address these challenges. Leveraging advanced deep learning methods, specifically Convolutional Neural Networks (CNNs), XIoT analyzes spectrogram images transformed from IoT network traffic data to detect subtle and complex attack patterns. Unlike traditional approaches, XIoT emphasizes interpretability by integrating explainable AI mechanisms, enabling cybersecurity analysts to understand and trust its predictions. By offering actionable insights into the factors driving its decision making, XIoT supports informed responses to cyber threats. Furthermore, the model’s architecture leverages the high-speed, low-latency characteristics of optical networks, ensuring the efficient processing of large-scale IoT data streams and supporting real-time detection in diverse IoT ecosystems. Comprehensive experiments on benchmark datasets, including KDD CUP99, UNSW NB15, and Bot-IoT, demonstrate XIoT’s exceptional accuracy rates of 99.34%, 99.61%, and 99.21%, respectively, significantly surpassing existing methods in both accuracy and interpretability. These results highlight XIoT’s capability to enhance IoT security by addressing real-world challenges, ensuring robust, scalable, and interpretable protection for IoT networks against sophisticated cyber threats.
(This article belongs to the Special Issue Optical Wireless Communication in 5G and Beyond)
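A minimal sketch of the spectrogram-plus-CNN idea the abstract describes is shown below: a 1-D traffic feature stream is turned into a log-magnitude spectrogram and classified with a small CNN. The traffic_to_spectrogram helper, layer sizes, window length, and two-class head are all illustrative assumptions, not the paper's actual XIoT architecture.

```python
# A minimal sketch: spectrogram of a traffic feature stream + tiny CNN.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

def traffic_to_spectrogram(series, fs=1.0):
    # Log-magnitude spectrogram of e.g. bytes-per-interval counts.
    _, _, sxx = spectrogram(np.asarray(series, dtype=float), fs=fs, nperseg=64)
    return np.log1p(sxx).astype(np.float32)

class TinyXIoTCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):          # x: (batch, 1, freq, time)
        return self.head(self.features(x).flatten(1))

spec = traffic_to_spectrogram(np.random.rand(4096))
logits = TinyXIoTCNN()(torch.from_numpy(spec)[None, None])
print(logits.shape)  # torch.Size([1, 2])
```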
19 pages, 1864 KiB  
Article
An FPGA-Based SiNW-FET Biosensing System for Real-Time Viral Detection: Hardware Amplification and 1D CNN for Adaptive Noise Reduction
by Ahmed Hadded, Mossaad Ben Ayed and Shaya A. Alshaya
Sensors 2025, 25(1), 236; https://doi.org/10.3390/s25010236 - 3 Jan 2025
Abstract
Impedance-based biosensing has emerged as a critical technology for high-sensitivity biomolecular detection, yet traditional approaches often rely on bulky, costly impedance analyzers, limiting their portability and usability in point-of-care applications. Addressing these limitations, this paper proposes an advanced biosensing system integrating a Silicon Nanowire Field-Effect Transistor (SiNW-FET) biosensor with a high-gain amplification circuit and a 1D Convolutional Neural Network (CNN) implemented on FPGA hardware. This design combines SiNW-FET biosensing technology with FPGA-implemented deep learning noise reduction, creating a compact system capable of real-time viral detection with minimal computational latency. The integration of a 1D CNN model on FPGA hardware for adaptive, non-linear noise filtering sets this design apart from conventional filtering approaches by achieving high accuracy and low power consumption in a portable format. This integration of SiNW-FET with FPGA-based CNN noise reduction offers a unique approach, as prior noise reduction techniques for biosensors typically rely on linear filtering or digital smoothing, which lack adaptive capabilities for complex, non-linear noise patterns. By introducing the 1D CNN on FPGA, this architecture enables real-time, high-fidelity noise reduction, preserving critical signal characteristics without compromising processing speed. Notably, the findings presented in this work are based exclusively on comprehensive simulations using COMSOL and MATLAB, as no physical prototypes or biomarker detection experiments were conducted. The SiNW-FET biosensor, functionalized with antibodies specific to viral antigens, detects impedance shifts caused by antibody–antigen interactions, providing a highly sensitive platform for viral detection. A high-gain folded-cascade amplifier enhances the Signal-to-Noise Ratio (SNR) to approximately 70 dB, verified through COMSOL and MATLAB simulations. Additionally, a 1D CNN model is employed for adaptive noise reduction, filtering out non-linear noise patterns and achieving an approximate 75% noise reduction across a broad frequency range. The CNN model, implemented on an Altera DE2 FPGA, enables high-throughput, low-latency signal processing, making the system viable for real-time applications. Performance evaluations confirmed the proposed system’s capability to enhance the SNR significantly while maintaining a compact and energy-efficient design suitable for portable diagnostics. This integrated architecture thus provides a powerful solution for high-precision, real-time viral detection and continuous health monitoring, advancing the role of biosensors in accessible point-of-care diagnostics.
(This article belongs to the Special Issue Advanced Sensor Technologies for Biomedical-Information Processing)
Figure 1. Complete sensor architecture.
Figure 2. SNR distributions before and after noise reduction.
Figure 3. The proposed 1D CNN architecture.
Figure 4. Comparative SNR improvement across noise reduction techniques for SiNW-FET biosensor signals.
Figure 5. FPGA-based 1D CNN accelerator design.
10 pages, 2766 KiB  
Proceeding Paper
Advancement of Electrospun Carbon Nanofiber Mats in Sensor Technology for Air Pollutant Detection
by Al Mamun, Mohamed Kiari, Abdelghani Benyoucef and Lilia Sabantina
Eng. Proc. 2024, 67(1), 82; https://doi.org/10.3390/engproc2024067082 (registering DOI) - 3 Jan 2025
Abstract
The use of electrospun carbon nanofibers (ECNs) has been the focus of considerable interest due to their potential implementation in sensing. These ECNs have unique structural and morphological features such as a high surface area-to-volume ratio, cross-linked pore structure, and good conductivity, making them well suited for sensing applications. Electrospinning technology, in which polymer solutions or melts are electrostatically deposited, enables the production of high-performance nanofibers with tailored properties, including fiber diameter, porosity, and composition. This controllability enables the use of ECNs to optimize sensing applications, resulting in improved sensor performance and sensitivity. While carbon nanofiber mats have potential for sensor applications, several challenges remain to improve selectivity, sensitivity, stability, and scalability. Sensor technologies play a critical role in the global sharing of environmental data, facilitating collaboration to address transboundary pollution issues and fostering international cooperation to find solutions to common environmental challenges. The use of carbon nanofibers for the detection of air pollutants offers a variety of possibilities for industrial applications in different sectors, ranging from healthcare to materials science. For example, optical, piezoelectric, and resistive ECN sensors effectively monitor particulate matter, while chemoresistive and catalytic ECN sensors are particularly good at detecting gaseous pollutants. For heavy metals, electrochemical ECN sensors offer accurate and reliable detection. This brief review provides insights into the latest developments and findings in the fabrication, properties, and applications of ECNs in the field of sensing. The efficient utilization of these resources holds significant potential for meeting the evolving needs of sensing technologies in various fields, with a particular focus on air pollutant detection.
(This article belongs to the Proceedings of The 3rd International Electronic Conference on Processes)
Figure 1. (a) Atomic force microscopy (AFM) image of a magnetic electrospun nanofiber mat; the scale bar shows 5 μm. (b) Confocal laser scanning microscope (CLSM) image showing the PAN/gelatin nanofiber mats on a 3D-printed sample; the scale indicates 50 μm.
Figure 2. Schematic of the experimental setup for the fabrication of the ZnO-MWCNT nanocomposite sensor and its ammonia gas sensing properties at room temperature. Reproduced from Ref. [56], originally published under a CC-BY license.
16 pages, 5868 KiB  
Article
A Deep Learning-Based Approach for Precise Emotion Recognition in Domestic Animals Using EfficientNetB5 Architecture
by Rashadul Islam Sumon, Haider Ali, Salma Akter, Shah Muhammad Imtiyaj Uddin, Md Ariful Islam Mozumder and Hee-Cheol Kim
Eng 2025, 6(1), 9; https://doi.org/10.3390/eng6010009 - 3 Jan 2025
Abstract
The perception of animal emotions is key to enhancing veterinary practice, human–animal interactions, and protecting domesticated species’ welfare. This study presents a unique emotion classification deep learning-based approach for pet animals. The actual and emotional status of dogs and cats has been classified using a modified EfficientNetB5 model. Utilizing a dataset of images classified into four emotion categories—angry, sad, happy, and neutral—the model incorporates sophisticated feature extraction methods, such as Dense Residual Blocks and Squeeze-and-Excitation (SE) blocks, to improve the focus on important emotional indicators. The approach is built on EfficientNetB5, which is known for providing an optimal balance between accuracy and processing requirements. The model exhibited robust generalization abilities for the subtle identification of emotional states, achieving 98.2% accuracy in training and 91.24% during validation on a separate dataset. These encouraging outcomes support the model’s promise for real-time emotion detection applications and demonstrate its adaptability for wider application in ongoing pet monitoring systems. In future work, the dataset will be enlarged, model performance will be extended to more species, and real-time capabilities will be developed for real-world implementation.
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications)
Figure 1. Flow diagram of pet animal face emotion detection.
Figure 2. Categorical images showing the four emotional states of domestic animals. Images in the first row are categorized as Angry, in the second row as Neutral/Other, in the third row as Sad, and in the fourth row as Happy.
Figure 3. The left panel displays the original images, while the right panel displays the corresponding pre-processed images after applying the noise and blur reduction algorithms.
Figure 4. Backbone architecture of the EfficientNetB5 model for pet emotion classification.
Figure 5. Squeeze-and-Excitation module architecture.
Figure 6. Dense Residual Block architecture.
Figure 7. Comparison of training and testing accuracy of (a) MobileNet, (b) VGG-16, (c) Inception V3, (d) AlexNet, (e) Xception, (f) DenseNet, (g) ResNet-50, and (h) the proposed Modified EfficientNetB5.
Figure 8. Comparison of confusion matrices of (a) MobileNet, (b) VGG-16, (c) Inception V3, (d) AlexNet, (e) Xception, (f) DenseNet, (g) ResNet-50, and (h) the proposed Modified EfficientNetB5.
Figure 9. Prediction results of the proposed Modified EfficientNetB5 model.
21 pages, 10322 KiB  
Article
Development of Automated Image Processing for High-Throughput Screening of Potential Anti-Chikungunya Virus Compounds
by Pathaphon Wiriwithya, Siwaporn Boonyasuppayakorn, Pattadon Sawetpiyakul, Duangpron Peypala and Gridsada Phanomchoeng
Appl. Sci. 2025, 15(1), 385; https://doi.org/10.3390/app15010385 - 3 Jan 2025
Abstract
Chikungunya virus, a member of the Alphavirus genus, continues to present a global health challenge due to its widespread occurrence and the absence of specific antiviral therapies. Accurate detection of viral infections, such as chikungunya, is critical for antiviral research, yet traditional methods are time-consuming and prone to error. This study presents the development and validation of an automated image processing algorithm designed to improve the accuracy and speed of high-throughput screening for potential anti-chikungunya virus compounds. Using MvTec Halcon software (Version 22.11), the algorithm was developed to detect and classify infected and uninfected cells in viral assays, and its performance was validated against manual counts conducted by virology experts, showing a strong correlation with Pearson correlation coefficients of 0.9807 for cell detection and 0.9886 for virus detection. These values indicate a high correlation between the algorithm and manual counts performed by three virology experts, demonstrating that the algorithm’s accuracy closely matches expert manual evaluations. Following statistical validation, the algorithm was applied to screen antiviral compounds, demonstrating its effectiveness in enhancing the throughput and accuracy of drug discovery workflows. This technology can be seamlessly integrated into existing virological research pipelines, offering a scalable and efficient tool to accelerate drug discovery and improve diagnostic workflows for vector-borne and emerging viral diseases. By addressing critical bottlenecks in speed and accuracy, it holds promise for tackling global virology challenges and advancing research into other viral infections.
(This article belongs to the Special Issue Digital Image Processing: Technologies and Applications)
Figure 1. Screenshot of MvTec Halcon software showing the cell detection algorithm, with segmented cell regions and program code for image acquisition and analysis.
Figure 2. An image captured from a 96-well plate. (a) The entire well, providing a view of the overall cell distribution. (b) A zoomed-in portion of the well, offering a closer look at the cellular arrangement. (c) A cropped section from the zoomed image, highlighting the detailed structure of individual cells for enhanced visualization.
Figure 3. An image captured from a 96-well plate. (a) The entire well, providing a view of the overall infected cell distribution. (b) A zoomed-in portion of the well, offering a closer look at the virus arrangement. (c) A cropped section from the zoomed image, highlighting the detailed structure of individual viruses for enhanced visualization.
Figure 4. Flowchart illustrating the steps of the cell detection algorithm.
Figure 5. Flowchart illustrating the steps of the virus detection algorithm.
Figure 6. (a) Manual counting by an expert, (b) algorithm cell counting, (c) algorithm virus counting.
Figure 7. Linear regression graphs showing the linearity, correlation coefficient, and R-squared values. (a) Linear regression comparing the cell counting algorithm with expert manual counts; (b) linear regression comparing the virus counting algorithm with expert manual counts.
Figure 8. (a) Bland–Altman plot for the cell counting algorithm and (b) Bland–Altman plot for the virus counting algorithm, illustrating the agreement between the algorithm and expert counts.
Figure 9. Box plots comparing the algorithm performance to expert counts. (a) Box plot for the cell counting algorithm and (b) box plot for the virus counting algorithm, assessing whether the algorithms fall within the range of expert counts.
Figure 10. Merged fluorescence images of cells infected with chikungunya virus (CHIKV) at an MOI of 0.5, incubated for 1 day. Rows (A–D) and columns (1–12) represent different sample wells, with specific conditions or treatments labeled above the wells.
Figure 11. Scatter plot illustrating the relationship between % cell viability and % remaining virus for each inhibitor substance, showing the impact of various inhibitors on both cell survival and virus presence.
19 pages, 3749 KiB  
Article
Advanced Data Framework for Sleep Medicine Applications: Machine Learning-Based Detection of Sleep Apnea Events
by Kristina Zovko, Yann Sadowski, Toni Perković, Petar Šolić, Ivana Pavlinac Dodig, Renata Pecotić and Zoran Đogaš
Appl. Sci. 2025, 15(1), 376; https://doi.org/10.3390/app15010376 - 3 Jan 2025
Viewed by 103
Abstract
Obstructive Sleep Apnea (OSA) is a prevalent condition that disrupts sleep quality and contributes to significant health risks, necessitating accurate and efficient diagnostic methods. This study introduces a machine learning-based framework aimed at detecting apnea events through analysis of polysomnographic (PSG) and oximetry data. The core component is a Long Short-Term Memory (LSTM) network, which is particularly suited to processing sequential time-series data, capturing complex temporal relationships within physiological signals such as oxygen saturation, heart rate, and airflow. Through extensive feature engineering and preprocessing, the framework optimizes data representation by normalizing, scaling, and encoding input features to enhance computational efficiency and model performance. Key results demonstrate the model’s effectiveness, achieving a balanced accuracy of 79%, precision of 68%, and recall of 76% on the test dataset, with validation set metrics similarly high, supporting the model’s ability to generalize effectively. Comprehensive hyperparameter tuning further contributed to a stable, robust architecture capable of accurately identifying apnea events, providing clinicians with a valuable tool for the early detection and tailored management of OSA. This data-driven framework offers an efficient, reliable solution for OSA diagnostics with the potential to improve clinical decision making and patient outcomes.
(This article belongs to the Special Issue Human Activity Recognition (HAR) in Healthcare, 2nd Edition)
Figure 1. System architecture: from data collection by recording patients to visualization through a web application, which provides doctors with a more interactive approach to analyzing the collected data.
Figure 2. The web application interface, developed using React.js. In the menu, the user inputs the patient ID and sends a request, which is handled by Node.js. The request triggers the execution of Python code to generate graphs as shown in the image.
Figure 3. Distribution of apnea events by sleep stage for a single patient.
Figure 4. SpO2 with labels for one patient, with 1 denoting a detected apnea and 0 no apneic event.
Figure 5. Architecture of the model used.
Figure 6. Confusion matrix of the final model on the test set.
Figure 7. Confusion matrix of the final model on the validation set.
Figure 8. SpO2 of patient X with confusion matrix labels.
Figure 9. SpO2 of patient Y with confusion matrix labels.
Figure 10. SpO2 of patient Z with confusion matrix labels.
Figure A1. Long Short-Term Memory (LSTM) cell [28].
22 pages, 6880 KiB  
Article
MonoSeg: An Infrared UAV Perspective Vehicle Instance Segmentation Model with Strong Adaptability and Integrity
by Peng Huang, Yan Yin, Kaifeng Hu and Weidong Yang
Sensors 2025, 25(1), 225; https://doi.org/10.3390/s25010225 - 3 Jan 2025
Viewed by 106
Abstract
Despite rapid progress in UAV-based infrared vehicle detection, achieving reliable target recognition remains challenging due to dynamic viewpoint variations and platform instability. The inherent limitations of infrared imaging, particularly low contrast ratios and thermal crossover effects, significantly compromise detection accuracy. Moreover, the computational constraints of edge computing platforms pose a fundamental challenge in balancing real-time processing requirements with detection performance. Here, we present MonoSeg, a novel instance segmentation framework optimized for UAV perspective infrared vehicle detection. Our approach introduces three key innovations: (1) the Ghost Feature Bottle Cross module (GFBC), which enhances backbone feature extraction efficiency while significantly reducing computational overhead; (2) the Scale Feature Recombination module (SFR), which optimizes feature selection in the Neck stage through adaptive multi-scale fusion; and (3) a Comprehensive Loss function that enforces precise instance boundary delineation. Extensive experimental evaluation on benchmark datasets demonstrates that MonoSeg achieves state-of-the-art performance across standard metrics, including Box mAP and Mask mAP, while maintaining substantially lower computational requirements compared to existing methods.
(This article belongs to the Section Sensing and Imaging)
Figure 1. Examples from the four image datasets: (a–c) publicly available datasets and (d) data we gathered ourselves.
Figure 2. Illustration of SAM large-model-assisted annotation [10].
Figure 3. DIVIS dataset statistical properties. (a) Histogram of the area distribution of all instances. (b) Histogram of the ratio of the sum of contour areas to the image area. (c) Scatter plot of the aspect ratio of instance bounding boxes versus the ratio of instance contour area. (d) Scatter plot of the ratio of instance contour area to instance bounding box area against the instance area.
Figure 4. Architecture diagram of the MonoSeg model.
Figure 5. Architecture diagram of the Ghost Feature Bottle Cross module (GFBC) structure.
Figure 6. Architecture diagram of the Scale Feature Recombination module (SFR) structure.
Figure 7. Scatter plot of model inference frame rate vs. Mask 0.5 mAP.
Figure 8. Bar chart comparing Box and Mask mAP across multiple thresholds.
Figure 9. Box and Mask P–R curves for the two target classes.
Figure 10. Visualization examples of comparative experimental results. (1) and (2) show the segmentation results of two images; (a–h) show each method and (i) shows the ground truth. Red indicates a small vehicle and green indicates a large vehicle.
Figure 11. Visualization comparison of heatmaps between the baseline and MonoSeg algorithms. (1–3) show the segmentation results of three images; (a,b) show the heatmaps of YOLOv8 and MonoSeg, respectively; (c) shows the ground truth. Red indicates a small vehicle and green indicates a large vehicle.
21 pages, 5930 KiB  
Article
Sustainable Valorization of Rice Straw into Biochar and Carbon Dots Using a Novel One-Pot Approach for Dual Applications in Detection and Removal of Lead Ions
by Jagpreet Singh, Monika Bhattu, Meenakshi Verma, Mikhael Bechelany, Satinder Kaur Brar and Rajendrasinh Jadeja
Nanomaterials 2025, 15(1), 66; https://doi.org/10.3390/nano15010066 - 3 Jan 2025
Viewed by 106
Abstract
Lead (Pb) is a highly toxic heavy metal that causes significant health hazards and environmental damage. Thus, the detection and removal of Pb²⁺ ions in freshwater sources are imperative for safeguarding public health and the environment. Moreover, the transformation of single resources into multiple high-value products is vital for achieving sustainable development goals (SDGs). In this regard, the present work focused on the preparation of two efficient materials, i.e., biochar (R-BC) and carbon dots (R-CDs), from a single resource (rice straw) via a novel approach using extraction and a hydrothermal process. Various microscopic and spectroscopic techniques confirmed the formation of the porous structure and spherical morphology of R-BC and R-CDs, respectively. FTIR analysis confirmed the presence of hydroxyl (–OH), carboxyl (–COO) and amine (N–H) groups on the R-CDs’ surface. The obtained blue luminescent R-CDs were employed as chemosensors for the detection of Pb²⁺ ions. The sensor exhibited a strong linear correlation over a concentration range of 1 µM to 100 µM, with a limit of detection (LOD) of 0.11 µM. Furthermore, the BET analysis of R-BC indicated a surface area of 1.71 m²/g and a monolayer volume of 0.0081 cm³/g, supporting its adsorption potential for Pb²⁺. The R-BC showed an excellent removal efficiency of 77.61%. The adsorption process followed the Langmuir isotherm model and second-order kinetics. Therefore, the dual use of rice straw-derived materials provides a cost-effective, environmentally friendly solution for Pb²⁺ detection and remediation to accomplish the SDGs.
Figure 1. Stepwise process of biochar and carbon dot synthesis.
Figure 2. Morphological and elemental analysis of biochar: (a,b) SEM images, (c) EDX analysis spectrum, (d) XRD pattern.
Figure 3. (a) N₂ adsorption–desorption isotherm of biochar; (b) BET curve illustrating the surface area and monolayer volume of biochar.
Figure 4. Microscopic and surface chemistry analysis of R-CDs: (a,b) TEM images illustrating the spherical-shaped R-CDs, (c) size distribution histogram illustrating the average R-CD size of 8–10 nm, and (d) FTIR spectra to explore the surface functionality.
Figure 5. (a) Absorption spectra of R-CDs; (b) fluorescence spectra of the synthesized R-CDs exhibiting an emission band at 430 nm; (c) excitation-dependent emissive fluorescence profile of R-CDs exhibiting a red shift in λem; (d) screening of R-CDs against various heavy metals, illustrating strong quenching in the presence of Pb²⁺, while the other metal ions do not affect the fluorescence behaviour of the CDs.
Figure 6. (a) Decline in the fluorescence spectra of R-CDs on titration with Pb²⁺ (1 µM–100 µM); (b) linearly decreasing response of R-CDs to the sequential increase in Pb²⁺ over 1 µM–100 µM; (c) Stern–Volmer plot of R-CDs for the detection of Pb²⁺; (d) interference studies of other potentially competing ions for R-CDs towards Pb²⁺.
Figure 7. Illustration of the chelation between R-CDs and Pb²⁺ via the formation of coordination bonds between the lone pairs on the surface functional groups and the electron-deficient Pb²⁺.
Figure 8. Point of zero charge study for R-BC.
Figure 9. (a) Illustration of the decrease in Pb²⁺ concentration with time on the addition of biochar; (b,c) trend in Pb²⁺ removal efficiency with time.
Figure 10. (a) Illustration of the decrease in Pb²⁺ concentration with time on the addition of R-BC; (b) graphical representation of the Langmuir, Freundlich, Temkin, DR, and Sips adsorption isotherm models.
Figure 11. Graphical representation of the PFOM (a), PSOM (b), intraparticle diffusion model (c), and Elovich model (d).
13 pages, 2999 KiB  
Communication
Bayesian Adaptive Detection for Distributed MIMO Radar with Insufficient Training Data
by Hongli Li, Ming Liu, Chunhe Chang, Binbin Li, Bilei Zhou, Hao Chen and Weijian Liu
Electronics 2025, 14(1), 164; https://doi.org/10.3390/electronics14010164 - 3 Jan 2025
Viewed by 159
Abstract
The distributed multiple-input multiple-output (MIMO) radar observes targets from different angles, which can overcome the adverse effects of target glint and avoid the situation where a target’s tangential flight cannot be effectively detected by the radar, thus providing great advantages in target detection. However, distributed MIMO often encounters a scarcity of training samples for target detection. To overcome this difficulty, this paper proposes a Bayesian approach. By modeling the target signal as a subspace signal, where each transmit–receive pair possesses a distinct and unknown covariance matrix governed by an inverse Wishart distribution, three efficient detectors are devised based on the generalized likelihood ratio test (GLRT), Rao, and Wald criteria. Comparative analysis with existing detectors reveals that the proposed Bayesian detectors exhibit superior performance, particularly in scenarios with limited training data. Experimental results demonstrate that the Bayesian GLRT achieves the highest probability of detection (PD), outperforming conventional detectors by requiring a lower signal-to-noise ratio (SNR). Furthermore, an increase in the degrees of freedom of the inverse Wishart distribution and the number of receiving antennas enhances detection performance, albeit at the cost of increased hardware requirements.
(This article belongs to the Section Computer Science & Engineering)
Figure 1. PFAs of the detectors under different clutter structures. M = 2, N = 2, K = 8, {L_mn} = {9, 10, 9, 10}, and p = 2.
Figure 2. PDs of the detectors under different SNRs. M = 2, N = 2, K = 8, {μ_mn} = {9, 10, 11, 12}, {ρ_mn} = {0.8, 0.86, 0.92, 0.98}, {L_mn} = {9, 10, 9, 10}, and p = 2.
Figure 3. PDs of the detectors under different SNRs. M = 2, N = 2, K = 8, {μ_mn} = {17, 18, 19, 20}, {ρ_mn} = {0.8, 0.86, 0.92, 0.98}, {L_mn} = {9, 10, 9, 10}, and p = 2.
Figure 4. PDs of the detectors under different SNRs. M = 2, N = 2, K = 8, {μ_mn} = {9, 10, 11, 12}, {ρ_mn} = {0.8, 0.86, 0.92, 0.98}, {L_mn} = {15, 16, 15, 16}, and p = 2.
Figure 5. PDs of the detectors under different SNRs. M = 2, N = 2, K = 8, {μ_mn} = {9, 10, 11, 12}, {ρ_mn} = {0.8, 0.86, 0.92, 0.98}, {L_mn} = {4, 5, 4, 5}, and p = 2.
Figure 6. PDs of the detectors under different SNRs. M = 2, N = 2, K = 8, {μ_mn} = {9, 10, 11, 12}, {ρ_mn} = {0.8, 0.86, 0.92, 0.98}, {L_mn} = {9, 10, 9, 10}, and p = 6.
Figure 7. PDs of the detectors under different SNRs. M = 2, N = 3, K = 8, {μ_mn} = {9, 10, 11, 12, 13, 14}, {ρ_mn} = {0.8, 0.836, 0.872, 0.908, 0.98}, {L_mn} = {9, 10, 9, 10, 9, 10}, and p = 2.
19 pages, 26867 KiB  
Article
Lipid Biomarkers in Urban Soils of the Alluvial Area near Sava River, Belgrade, Serbia
by Gordana Dević, Sandra Bulatović, Jelena Avdalović, Nenad Marić, Jelena Milić, Mila Ilić and Tatjana Šolević Knudsen
Molecules 2025, 30(1), 154; https://doi.org/10.3390/molecules30010154 - 3 Jan 2025
Viewed by 247
Abstract
This study focused on the investigation of soil samples from the alluvial zone of the Sava River, located near the heating plant in New Belgrade, Serbia. Using gas chromatography with flame ionization detection (GC-FID), a broad range of alkanes, including linear n-alkanes (C10 to C33) and isoprenoids, was analyzed in all samples. The resulting datasets were simplified effectively by applying multivariate statistical analysis. Various geochemical indices (CPI, ACL, AI, TAR, etc.) and ratios (S/L, Paq, Pwax, etc.) were calculated and used to distinguish between biogenic and anthropogenic contributions. This approach added a higher level of precision to the source identification of hydrocarbons and provided a detailed geochemical characterization of the investigated soil. The results showed that the topsoil had a high content of TPH (average value, 90.65 mg kg⁻¹), potentially related to accidental oil spills that occurred repeatedly over extended periods. The uncommon n-alkane profiles observed in the investigated soil samples are probably the result of inputs from anthropogenic sources, indicating that petroleum was the main source of the short-chain n-alkanes. The methodology developed in this study proved efficient for assessing the environmental quality of soil in an urban part of New Belgrade, and it can also serve as a useful tool for soil monitoring and pollution assessment in other (sub)urban areas. Full article
(This article belongs to the Special Issue Environmental Analysis of Organic Pollutants, 2nd Edition)
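To make the index calculations concrete, the following minimal Python sketch computes several of the cited indices (CPI, ACL, TAR, Paq) from quantified n-alkane abundances. The chain-length windows and the Bray–Evans form of the CPI are common literature defaults and are assumptions here; the study's exact ranges are not stated in the abstract.

```python
# Sketch: n-alkane source indices from GC-FID-quantified abundances.
# Chain-length windows below are common literature defaults (assumed).

def cpi(conc, lo=25, hi=33):
    """Carbon Preference Index (Bray-Evans form) over odd chains lo..hi."""
    odd = sum(conc.get(i, 0.0) for i in range(lo, hi + 1, 2))
    even_lo = sum(conc.get(i, 0.0) for i in range(lo - 1, hi, 2))
    even_hi = sum(conc.get(i, 0.0) for i in range(lo + 1, hi + 2, 2))
    return 0.5 * (odd / even_lo + odd / even_hi)

def acl(conc, lo=25, hi=33):
    """Average Chain Length over odd n-alkanes lo..hi."""
    chains = range(lo, hi + 1, 2)
    total = sum(conc.get(i, 0.0) for i in chains)
    return sum(i * conc.get(i, 0.0) for i in chains) / total

def tar(conc):
    """Terrigenous/aquatic ratio: long odd chains over short odd chains."""
    return (conc.get(27, 0.0) + conc.get(29, 0.0) + conc.get(31, 0.0)) / \
           (conc.get(15, 0.0) + conc.get(17, 0.0) + conc.get(19, 0.0))

def paq(conc):
    """Submerged/floating aquatic macrophyte proxy."""
    mid = conc.get(23, 0.0) + conc.get(25, 0.0)
    return mid / (mid + conc.get(29, 0.0) + conc.get(31, 0.0))

# Hypothetical concentrations (mg/kg dry soil), keyed by carbon number:
sample = {15: 2.1, 17: 3.0, 19: 2.4, 23: 1.2, 24: 1.1, 25: 1.5, 26: 1.0,
          27: 1.8, 28: 0.9, 29: 2.2, 30: 0.8, 31: 2.0, 32: 0.7, 33: 1.1}
print(f"CPI={cpi(sample):.2f} ACL={acl(sample):.2f} "
      f"TAR={tar(sample):.2f} Paq={paq(sample):.2f}")
```

A CPI near 1 together with abundant short chains points toward petroleum input, while a CPI well above 1 with dominant long odd chains indicates plant waxes; this is the kind of discrimination for which such indices are used.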
Show Figures

Figure 1: Investigated area (the heating plant in New Belgrade, Serbia) and the positions of sampling microlocations.
Figure 2: GC-FID chromatograms of n-alkanes in some soil samples from the alluvial area of New Belgrade.
Figure 3: Box and whisker plot showing the variation of geochemical indices in the investigated soil samples.
Figure 4: Ternary diagram showing the relative abundances of the carbon preference indices for the investigated soil samples: short-chain alkane homologs (CPI1), mid-chain n-alkanes (CPI2), and all n-alkane homologs (CPI3).
Figure 5: Relationship between the Average Chain Length and the Carbon Preference Index CPI2 in the urban soil samples.
Figure 6: Cross-plots of the terrigenous/aquatic ratio versus proxy ratios: (a) submerged/floating aquatic macrophytes (Paq) and (b) terrestrial plants (Pwax).
Figure 7: Dendrogram obtained by applying Q-mode hierarchical cluster analysis to the evaluation indices calculated from the distribution of n-alkanes in the OM of the urban soils analyzed in this study.
20 pages, 7507 KiB  
Article
Sliding-Window Dissimilarity Cross-Attention for Near-Real-Time Building Change Detection
by Wen Lu and Minh Nguyen
Remote Sens. 2025, 17(1), 135; https://doi.org/10.3390/rs17010135 - 2 Jan 2025
Viewed by 275
Abstract
A near-real-time change detection network can consistently identify unauthorized construction activities over a wide area, empowering authorities to enforce regulations efficiently. It can also promptly assess building damage, enabling expedited rescue efforts. The extensive adoption of deep learning in change detection has placed a predominant emphasis on detection performance, pursued primarily by expanding network depth and width while overlooking inference time and computational cost. To accurately represent the spatio-temporal semantic correlations between pre-change and post-change images, we create an innovative transformer attention mechanism named Sliding-Window Dissimilarity Cross-Attention (SWDCA), which detects spatio-temporal semantic discrepancies by explicitly modeling the dissimilarity of bi-temporal tokens, departing from the mono-temporal similarity attention typically used in conventional transformers. To fulfill the near-real-time requirement, SWDCA employs a sliding-window scheme that limits the range of the cross-attention mechanism to a predetermined window or dilated window size. This approach not only excludes distant, irrelevant information but also reduces computational cost. Furthermore, we develop a lightweight Siamese backbone for extracting building and environmental features and integrate an SWDCA module into it, forming an efficient change detection network. Quantitative evaluations and visual analyses of thorough experiments verify that our method achieves top-tier accuracy on two building change detection datasets of remote sensing imagery, while also achieving a real-time inference speed of 33.2 FPS on a mobile GPU. Full article
(This article belongs to the Special Issue Remote Sensing and SAR for Building Monitoring)
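The following PyTorch sketch illustrates the windowing idea behind SWDCA: cross-attention between bi-temporal feature maps restricted to non-overlapping windows, with attention logits negated so that dissimilar token pairs receive the largest weights. It is a sketch under simplifying assumptions (no learned projections, no dilated windows, a single head), not the paper's exact module.

```python
# Illustrative only: windowed cross-attention between bi-temporal feature
# maps, with logits negated so dissimilar token pairs get the most weight.
# The published SWDCA module may differ (projections, dilation, heads).
import torch

def windowed_dissimilarity_cross_attention(x1, x2, win=8):
    """x1, x2: (B, C, H, W) features from pre-/post-change images."""
    B, C, H, W = x1.shape
    assert H % win == 0 and W % win == 0, "feature map must tile evenly"

    def to_windows(x):
        # (B, C, H, W) -> (B * nWindows, win * win, C)
        x = x.reshape(B, C, H // win, win, W // win, win)
        x = x.permute(0, 2, 4, 3, 5, 1)
        return x.reshape(-1, win * win, C)

    q, k = to_windows(x1), to_windows(x2)
    # Negated scaled dot product: attention concentrates on disagreement.
    attn = (-(q @ k.transpose(1, 2)) / C ** 0.5).softmax(dim=-1)
    out = attn @ k                       # aggregate post-change tokens
    nH, nW = H // win, W // win          # restore spatial layout
    out = out.reshape(B, nH, nW, win, win, C).permute(0, 5, 1, 3, 2, 4)
    return out.reshape(B, C, H, W)

# Example: 64x64 bi-temporal features, attention limited to 8x8 windows.
f1, f2 = torch.randn(2, 32, 64, 64), torch.randn(2, 32, 64, 64)
delta = windowed_dissimilarity_cross_attention(f1, f2)   # (2, 32, 64, 64)
```

Restricting attention to fixed-size windows replaces the quadratic cost over all H × W tokens with a cost linear in the number of windows (quadratic only in the small window size), which is the property the abstract credits for the near-real-time speed.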
Show Figures

Figure 1: The size of circles represents the number of network parameters; circles positioned closer to the top left indicate better performance. The computational complexity, quantified by Multiply–Accumulate Operations (MACs), was evaluated using bi-temporal image pairs with a resolution of 512 × 512 pixels.
Figure 2: The structure of the sliding-window dissimilarity cross-attention module, where (A, B) ∈ {(1, 2), (2, 1)} denotes the two time points and ⊗ represents matrix multiplication.
Figure 3: The architecture of our efficient change detection network, where ⊕ represents element-wise addition.
Figure 4: Structures of MBConv and Fused-MBConv [30].
Figure 5: Comparison of building change predictions generated by BAT and the SWDCA network on the LEVIR-CD+ dataset.
Figure 6: Comparison of building change predictions generated by BAT and the SWDCA network on the S2looking dataset.
Figure 7: Comparison of building change predictions generated by various methods on the S2looking dataset.
Figure 8: Failure cases on the S2looking dataset.
24 pages, 3468 KiB  
Article
Adaptive Real-Time Translation Assistance Through Eye-Tracking
by Dimosthenis Minas, Eleanna Theodosiou, Konstantinos Roumpas and Michalis Xenos
AI 2025, 6(1), 5; https://doi.org/10.3390/ai6010005 - 2 Jan 2025
Viewed by 383
Abstract
This study introduces the Eye-tracking Translation Software (ETS), a system that leverages eye-tracking data and real-time translation to enhance reading flow for non-native language users in complex, technical texts. By measuring fixation duration to detect moments of cognitive load, ETS selectively provides translations, maintaining reading flow and engagement without undermining language learning. The key technological components include a desktop eye-tracker integrated with a custom Python-based application. Through a user-centered design, ETS dynamically adapts to individual reading needs, reducing cognitive strain by offering word-level translations when needed. A study involving 53 participants assessed ETS's impact on reading speed, fixation duration, and user experience, with findings indicating improved comprehension and reading efficiency. Results demonstrated that gaze-based adaptations significantly improved participants' reading experience and reduced cognitive load. Participants rated ETS's usability positively and expressed preferences for customization, such as pop-up placement and sentence-level translations. Future work will integrate AI-driven adaptations, allowing the system to adjust based on user proficiency and reading behavior. The study contributes to the growing evidence of eye-tracking's potential in educational and professional applications, offering a flexible, personalized approach to reading assistance that balances language exposure with real-time support. Full article
(This article belongs to the Special Issue Machine Learning for HCI: Cases, Trends and Challenges)
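A minimal Python sketch of the gaze-triggered translation loop described above: fixation time is accumulated per word, and a translation pop-up is shown once a dwell threshold is crossed. The threshold value, gaze-sample format, and the translate/show_popup helpers are illustrative assumptions, not details of ETS itself.

```python
# Gaze-triggered translation sketch: accumulate fixation time per word and
# pop up a translation once a dwell threshold is crossed. The threshold,
# sample format, and helper callbacks are assumptions, not ETS internals.
from collections import defaultdict

DWELL_THRESHOLD_MS = 600            # assumed cognitive-load cutoff
fixation_ms = defaultdict(float)    # accumulated dwell time per word id
translated = set()                  # words already translated this session

def on_gaze_sample(word_id, sample_interval_ms, translate, show_popup):
    """Called for each gaze sample that falls inside a word's bounding box."""
    fixation_ms[word_id] += sample_interval_ms
    if fixation_ms[word_id] >= DWELL_THRESHOLD_MS and word_id not in translated:
        translated.add(word_id)                  # translate each word once
        show_popup(word_id, translate(word_id))  # word-level pop-up

# Example with stub callbacks: ~40 samples at 60 Hz crosses the threshold.
for _ in range(40):
    on_gaze_sample("w42", 16.7,
                   translate=lambda w: f"<translation of {w}>",
                   show_popup=lambda w, t: print(f"{w}: {t}"))
```

Gating on accumulated dwell rather than a single fixation keeps brief glances from triggering pop-ups, which matches the stated goal of supporting the reader only at moments of sustained difficulty.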
Show Figures

Figure 1: Mockup screen of the original translation pop-up, where user frustration was identified.
Figure 2: The final graphical user interface of ETS when the user triggers a translation.
Figure 3: The user's document is displayed as scrollable content, and a connection with the eye-tracker can be initialized.
Figure 4: Experimental laboratory.
Figure 5: Word with and without ETS.
16 pages, 3567 KiB  
Article
Research on Lightweight Algorithm Model for Precise Recognition and Detection of Outdoor Strawberries Based on Improved YOLOv5n
by Xiaoman Cao, Peng Zhong, Yihao Huang, Mingtao Huang, Zhengyan Huang, Tianlong Zou and He Xing
Agriculture 2025, 15(1), 90; https://doi.org/10.3390/agriculture15010090 - 2 Jan 2025
Viewed by 295
Abstract
When picking strawberries outdoors, factors such as changing light, occlusion by obstacles, and the small size of detection targets lead to poor recognition accuracy and low recognition rates. To address this, an improved YOLOv5n high-precision strawberry recognition algorithm is proposed. The algorithm replaces the original YOLOv5n backbone network with FasterNet, improving the detection rate. The MobileViT attention mechanism module is added to improve feature extraction for small target objects, giving the model higher detection accuracy at a smaller module size. The CBAM hybrid attention module and the C2f module are introduced to improve the feature expression ability of the neural network, enrich gradient flow information, and improve the performance and accuracy of the model. The SPPELAN module is added as well to improve the model's detection efficiency for small objects. The experimental results show that the detection accuracy of the improved model is 98.94%, the recall rate is 99.12%, the model volume is 53.22 MB, and the mAP value is 99.43%. Compared with the original YOLOv5n, the detection accuracy increased by 14.68% and the recall rate by 11.37%. This approach achieves accurate detection and identification of strawberries under complex outdoor conditions and provides a theoretical basis for accurate outdoor identification and precise picking technology. Full article
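For reference, this is a compact PyTorch sketch of the CBAM hybrid attention block cited in the abstract: channel attention followed by spatial attention, after Woo et al. The reduction ratio and spatial kernel size are common defaults and assumed here, since the paper's configuration is not given in the abstract.

```python
# CBAM sketch: channel attention (shared MLP over avg/max descriptors)
# followed by spatial attention (conv over stacked avg/max maps).
# reduction=16 and a 7x7 spatial kernel are common defaults (assumed).
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel gating: where (in C) to look.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial gating: where (in H, W) to look.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

# Example: gating a 64-channel feature map from the detection neck.
feat = torch.randn(1, 64, 80, 80)
out = CBAM(64)(feat)   # same shape, reweighted per channel and location
```

Because the block preserves the feature-map shape, it can be dropped between existing backbone or neck stages without changing downstream layer sizes, which is why it is a popular retrofit for YOLO-family detectors.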
Show Figures

Figure 1: Three types of strawberry images.
Figure 2: Marking effect diagram.
Figure 3: Improved YOLOv5n network structure diagram. Note: FasterNet is the backbone network; Neck is a bottleneck structure. C represents the number of channels, H represents height, and W represents width.
Figure 4: MobileViT attention mechanism module.
Figure 5: Structure of C2f.
Figure 6: SPPELAN module network diagram.
Figure 7: Outdoor detection effect picture.
Figure 8: Different model training process curves.