Search Results (50,324)

Search Parameters:
Keywords = classification

17 pages, 1073 KiB  
Article
Uncertainty Quantification in Data Fusion Classifier for Ship-Wake Detection
by Maice Costa, Daniel Sobien, Ria Garg, Winnie Cheung, Justin Krometis and Justin A. Kauffman
Remote Sens. 2024, 16(24), 4669; https://doi.org/10.3390/rs16244669 (registering DOI) - 14 Dec 2024
Abstract
Using deep learning model predictions requires not only understanding the model’s confidence but also its uncertainty, so we know when to trust the prediction or require support from a human. In this study, we used Monte Carlo dropout (MCDO) to characterize the uncertainty of deep learning image classification algorithms, including feature fusion models, on simulated synthetic aperture radar (SAR) images of persistent ship wakes. Compared to a baseline, we used the distribution of predictions from dropout with simple mean value ensembling and the Kolmogorov–Smirnov (KS) test to classify in-domain and out-of-domain (OOD) test samples, created by rotating images to angles not present in the training data. Our objective was to improve the classification robustness and identify OOD images at test time. The mean value ensembling did not improve the performance over the baseline, in that there was a −1.05% difference in the Matthews correlation coefficient (MCC) from the baseline model averaged across all SAR bands. The KS test, by contrast, saw an improvement of +12.5% in MCC and was able to identify the majority of OOD samples. Leveraging the full distribution of predictions improved the classification robustness and allowed labeling test images as OOD. The feature fusion models, however, did not improve the performance over the single SAR-band models, demonstrating that it is best to rely on the highest quality data source available (in our case, C-band).
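As a sketch of the two strategies this abstract compares, the snippet below runs a stand-in dropout model many times, ensembles the passes by their mean, and applies the KS test between a test image's prediction distribution and an in-domain validation distribution. The model (`noisy_model`), the data values, and the threshold placement are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def mc_dropout_predict(predict_fn, x, n_passes=100):
    """Run the model n_passes times with dropout active and return
    the distribution of predicted wake probabilities."""
    return np.array([predict_fn(x) for _ in range(n_passes)])

# Hypothetical stand-in for a dropout-enabled classifier: a sigmoid
# whose logit is perturbed on every forward pass.
def noisy_model(x):
    return 1.0 / (1.0 + np.exp(-(x + rng.normal(0, 0.5))))

# Reference distribution built from in-domain validation images.
val_scores = mc_dropout_predict(noisy_model, 2.0, 100)

# Strategy 1: mean-value ensembling collapses the passes to one probability.
test_scores = mc_dropout_predict(noisy_model, 1.5, 100)
mean_pred = test_scores.mean()

# Strategy 2: the KS distance between the test distribution and the
# validation distribution; a large KS flags the image as out-of-domain.
ks_stat, _ = ks_2samp(test_scores, val_scores)
is_ood = ks_stat > 0.9  # threshold placement is an assumption
```

The KS statistic uses the whole prediction distribution rather than just its mean, which is what lets it separate OOD rotations that the ensembled probability alone misses.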
Figure 1. Flowchart outlining the simulation process that generated the simulated SAR data. Input parameters are on the left, green arrows indicate where the environmental parameters were injected into the pipeline, and the yellow arrows indicate where the sensor parameters were injected. IDP: initial data plane; SAS: surface active substance; IR: infrared [26]. Reprinted with permission from Ref. [26], 2023, Sobien.
Figure 2. Example of two Kolmogorov–Smirnov (KS) test measurements relative to validation results for a wake positive case (blue). The plots are cumulative distribution functions (CDF), which measure the proportion (y-axis) of a distribution that is equal to or less than the prediction probability (x-axis). The in-domain wake distribution for a single image is shown in orange, with a measured KS of 0.47, and the out-of-domain (OOD) wake distribution for a single image is shown in green, with a measured KS of 1.0. The bi-directional arrows visually represent the measured KS score. The in- and out-of-domain results are from the same image, but the out-of-domain image has been rotated 30 degrees.
Figure 3. The top row shows the baseline classifier results and the bottom row has the MCDO classifier results. Each column of subplots is for a different SAR band, meaning a model trained and evaluated on the corresponding band. The results in each subplot are grouped on the left-hand side for in-domain angles, while results on the right-hand side are OOD angles. Colors indicate the target or ground truth of the image, either orange for wake or blue for no-wake.
Figure 4. C-band test results for in-domain (0-degree rotation on left-hand side) and OOD (30-degree rotation on right-hand side) predictions for no-wake ground truth (blue), wake ground truth (orange), and the reference validation CDF curves (black). The reference curve near 0 is for the no-wake images, while the reference curve near 1 is for the wake images. Each blue or orange line represents the distribution of outputs for a single image passing through the MCDO classifier 100 times.
Figure 5. Strip plot showing the mean probability for the MCDO classifier prediction probabilities of each image. Color labels are based on the prediction from the KS value, where no-wake (blue) are CDF curves that are within a KS distance of 0.9 from the respective band’s no-wake validation data, wake (orange) are curves within KS 0.9 of the wake validation data, and wake out-of-domain are those curves that are greater than KS 0.9 from either validation curve.
Figure 6. MCC results split by in-domain angles (left), OOD angles (middle), and all the image domains together (right). The baseline classifier results are in blue, the mean predicted probability of the MCDO classifier is in orange, and the KS predictions from the MCDO distributions are in green.
Figure 7. Kernel density estimations for the distribution of C-band standard deviations (STD) for the MCDO classifier. The 0-degree rotation (blue) is in-domain. The 30-, 60-, and 105-degree rotated images (orange, green, and red, respectively) are OOD.
Figure 8. Standard deviation of the MCDO classifier results. Each column shows a different SAR band. The results in each subplot are grouped on the left-hand side for in-domain angles, while the results on the right-hand side are OOD angles. Color indicates the target or ground truth of the image, either orange for wake or blue for no-wake. The standard deviations of in-domain no-wake images and OOD images often overlap, making it hard to use standard deviation to distinguish between in- and out-of-domain images. Note that the circles are outlier data points within that given distribution.
11 pages, 408 KiB  
Article
Domain Adversarial Convolutional Neural Network Improves the Accuracy and Generalizability of Wearable Sleep Assessment Technology
by Adonay S. Nunes, Matthew R. Patterson, Dawid Gerstel, Sheraz Khan, Christine C. Guo and Ali Neishabouri
Sensors 2024, 24(24), 7982; https://doi.org/10.3390/s24247982 (registering DOI) - 14 Dec 2024
Abstract
Wearable accelerometers are widely used as an ecologically valid and scalable solution for long-term at-home sleep monitoring in both clinical research and care. In this study, we applied a deep learning domain adversarial convolutional neural network (DACNN) model to this task and demonstrated that this new model outperformed existing sleep algorithms in classifying sleep–wake and estimating sleep outcomes based on wrist-worn accelerometry. This model generalized well to another dataset based on different wearable devices and activity counts, achieving an accuracy of 80.1% (sensitivity 84% and specificity 58%). Compared to commonly used sleep algorithms, this model resulted in the smallest error in wake after sleep onset (MAE of 48.7, vs. Cole–Kripke of 86.2, Sadeh of 108.2, and z-angle of 57.5) and sleep efficiency (MAE of 11.8, vs. Cole–Kripke of 18.4, Sadeh of 23.3, and z-angle of 9.3) outcomes. Despite being around for many years, accelerometer-alone devices continue to be useful due to their low cost, long battery life, and ease of use. Improving the accuracy and generalizability of sleep algorithms for accelerometer wrist devices is of utmost importance. Here, we demonstrated that domain adversarial convolutional neural networks can improve the overall accuracy, especially the specificity, of sleep–wake classification using wrist-worn accelerometer data, substantiating their use as a scalable and valid approach for sleep outcome assessment in real life.
(This article belongs to the Section Wearables)
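The core of domain adversarial training is a gradient reversal between the shared feature extractor and the domain classifier: the features descend the label loss but ascend the domain loss, pushing them toward domain invariance. Below is a minimal NumPy sketch of that training signal, with linear layers and synthetic data standing in for the paper's convolutional model; every name, shape, and value is a hypothetical illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two domains with shifted features; only domain 0 has sleep labels.
X0 = rng.normal(0.0, 1.0, (64, 8)); y0 = (X0[:, 0] > 0).astype(float)
X1 = rng.normal(0.5, 1.0, (64, 8))               # unlabeled target domain
X = np.vstack([X0, X1])
d = np.concatenate([np.zeros(64), np.ones(64)])  # domain labels

W = rng.normal(0, 0.1, (8, 4))   # shared feature extractor (linear for brevity)
wl = rng.normal(0, 0.1, 4)       # sleep-wake label head
wd = rng.normal(0, 0.1, 4)       # domain adversarial head
lam, lr = 0.1, 0.05
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(200):
    F = X @ W                          # shared features for both domains
    p_lab = sigmoid(F[:64] @ wl)       # label prediction (source domain only)
    p_dom = sigmoid(F @ wd)            # domain prediction (all samples)
    # Cross-entropy gradients at the two heads
    g_lab = (p_lab - y0) / 64
    g_dom = (p_dom - d) / 128
    wl -= lr * F[:64].T @ g_lab        # both heads descend their own loss
    wd -= lr * F.T @ g_dom
    # Gradient reversal: the extractor descends the label loss but
    # ASCENDS the domain loss (note the minus sign on the lam term).
    gW = X[:64].T @ np.outer(g_lab, wl) - lam * X.T @ np.outer(g_dom, wd)
    W -= lr * gW

# If the features became domain-invariant, domain accuracy hovers near chance.
dom_acc = ((sigmoid(X @ W @ wd) > 0.5) == d).mean()
```

At inference the domain head is discarded, as the abstract's Figure 1 caption notes; only the label classifier is used.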
Figure 1. Model architecture. The DAsleepCNN model is composed of modules of a convolutional layer, batch normalization, and max pooling; after three modules the output is flattened and sent to a label classifier where it predicts sleep–wake labels for the MESA dataset, and to a domain adversarial classifier that classifies the dataset domain of the input. For MESA samples, the input labels are the dataset label and sleep–wake label; for NC samples only the dataset label is provided. For inference, the domain adversarial component is not used, only the label classifier.
Figure 2. Model accuracy. The accuracies of the four models presented are plotted for the NC and MESA datasets. For MESA, the highest accuracy was achieved by noDACNN25+25. However, this model had a marked drop in performance when applied on NC, showing poor generalizability. DACNN25+1, on the other hand, had a high accuracy on NC, which crucially was on par with its performance on MESA.
Figure 3. Confusion matrices and ROC-AUC. On top, the confusion matrix and ROC-AUC are plotted for the NC dataset using the best-performing model with input past 25 + 1. On the bottom, the same plots are shown for the MESA dataset with the model input of the past 25 + 25.
Figure 4. Average input values for correct and incorrect predictions in the datasets. The violin plots show the mean and 25th and 75th interquartile ranges for true and false predictions for the best-performing models. The left represents predictions from the NC dataset and the right from the MESA dataset.
57 pages, 720 KiB  
Review
Exploring Kernel Machines and Support Vector Machines: Principles, Techniques, and Future Directions
by Ke-Lin Du, Bingchun Jiang, Jiabin Lu, Jingyu Hua and M. N. S. Swamy
Mathematics 2024, 12(24), 3935; https://doi.org/10.3390/math12243935 (registering DOI) - 13 Dec 2024
Abstract
The kernel method is a tool that converts data to a kernel space where operations can be performed. When converted to a high-dimensional feature space by using kernel functions, the data samples are more likely to be linearly separable. Traditional machine learning methods can be extended to the kernel space, such as the radial basis function (RBF) network. As a kernel-based method, the support vector machine (SVM) is one of the most popular nonparametric classification methods, and is optimal in terms of computational learning theory. Based on statistical learning theory and the maximum margin principle, SVM attempts to determine an optimal hyperplane by addressing a quadratic programming (QP) problem. Using Vapnik–Chervonenkis dimension theory, SVM maximizes generalization performance by finding the widest classification margin within the feature space. In this paper, kernel machines and SVMs are systematically introduced. We first describe how to turn classical methods into kernel machines, and then give a literature review of existing kernel machines. We then introduce the SVM model, its principles, and various SVM training methods for classification, clustering, and regression. Related topics, including optimizing model architecture, are also discussed. We conclude by outlining future directions for kernel machines and SVMs. This article functions both as a state-of-the-art survey and a tutorial.
(This article belongs to the Special Issue Matrix Factorization for Signal Processing and Machine Learning)
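The kernel trick this review builds on can be illustrated with a kernel perceptron on XOR data, which is not linearly separable in the input space but becomes separable under an RBF kernel. This is a generic textbook sketch, not code from the review:

```python
import numpy as np

# XOR data: no straight line separates the two classes in input space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])

def rbf(a, b, gamma=2.0):
    """RBF kernel: an inner product in an implicit, infinite-dimensional
    feature space, computed without ever constructing that space."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Gram matrix: all the learner ever needs from the data.
K = np.array([[rbf(xi, xj) for xj in X] for xi in X])

# Kernel perceptron: every computation uses K, never explicit features.
alpha = np.zeros(len(X))
for _ in range(50):
    for i in range(len(X)):
        if np.sign(np.sum(alpha * y * K[:, i])) != y[i]:
            alpha[i] += 1

preds = np.sign(K @ (alpha * y))  # correctly classifies all four XOR points
```

An SVM replaces the perceptron update with the maximum-margin QP the abstract describes, but it consumes the same Gram matrix, which is why any kernelizable method extends to the kernel space the same way.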
16 pages, 1308 KiB  
Article
Evaluating DL Model Scaling Trade-Offs During Inference via an Empirical Benchmark Analysis
by Demetris Trihinas, Panagiotis Michael and Moysis Symeonides
Future Internet 2024, 16(12), 468; https://doi.org/10.3390/fi16120468 (registering DOI) - 13 Dec 2024
Abstract
With generative Artificial Intelligence (AI) capturing public attention, the appetite of the technology sector for larger and more complex Deep Learning (DL) models is continuously growing. Traditionally, the focus in DL model development has been on scaling the neural network’s foundational structure to increase computational complexity and enhance the representational expressiveness of the model. However, with recent advancements in edge computing and 5G networks, DL models are now aggressively being deployed and utilized across the cloud–edge–IoT continuum for the realization of in situ intelligent IoT services. This paradigm shift introduces a growing need for AI practitioners to focus on inference costs, including latency, computational overhead, and energy efficiency. This work presents a benchmarking framework designed to assess DL model scaling across three key performance axes during model inference: classification accuracy, computational overhead, and latency. The framework’s utility is demonstrated through an empirical study involving various model structures and variants, as well as publicly available datasets for three popular DL use cases covering natural language understanding, object detection, and regression analysis.
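A trade-off study of this kind reduces to measuring parameter count and per-request latency across model variants of increasing size. The sketch below uses toy MLP "variants" standing in for the BERT/EfficientNet/MLP families the study benchmarks; all sizes, names, and request counts are assumptions for illustration:

```python
import time
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(widths):
    """Build stand-in 'model variants' of increasing width, analogous to
    scaling a family from its smallest to its largest member."""
    dims = [32] + widths + [10]
    return [rng.normal(0, 0.1, (a, b)) for a, b in zip(dims[:-1], dims[1:])]

def infer(model, x):
    for W in model:
        x = np.maximum(x @ W, 0)  # ReLU forward pass
    return x

def benchmark(model, n_requests=200):
    """Measure mean per-request inference latency and model size."""
    x = rng.normal(size=(1, 32))
    t0 = time.perf_counter()
    for _ in range(n_requests):
        infer(model, x)
    latency_ms = 1000 * (time.perf_counter() - t0) / n_requests
    params = sum(W.size for W in model)
    return params, latency_ms

variants = {"small": [64], "medium": [256, 256], "large": [1024, 1024, 1024]}
results = {name: benchmark(make_mlp(w)) for name, w in variants.items()}
```

Plotting `results` alongside each variant's task accuracy yields exactly the three axes the framework reports: quality, computational overhead, and latency.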
Figure 1. High-level overview of a deep neural network.
Figure 2. Pipeline of performance evaluation trade-offs.
Figure 3. Inference quality (classification accuracy and MSE) with respect to model complexity. The presented plots include: (a) BERT model variants, (b) EfficientNet model variants, and (c) MLP-Regression model variants.
Figure 4. Computational overhead of inference with respect to model complexity. The presented plots include: (a) BERT model variants, (b) EfficientNet model variants, and (c) MLP-Regression model variants.
Figure 5. Inference latency with respect to model complexity. The presented plots include: (a) BERT model variants, (b) EfficientNet model variants, and (c) MLP-Regression model variants.
22 pages, 828 KiB  
Article
MediScan: A Framework of U-Health and Prognostic AI Assessment on Medical Imaging
by Sibtain Syed, Rehan Ahmed, Arshad Iqbal, Naveed Ahmed and Mohammed Ali Alshara
J. Imaging 2024, 10(12), 322; https://doi.org/10.3390/jimaging10120322 - 13 Dec 2024
Abstract
With technological advancements, remarkable progress has been made in the convergence of health sciences and Artificial Intelligence (AI). Modern health systems are proposed to ease patient diagnostics. However, the challenge is to provide AI-based precautions to patients and doctors for more accurate risk assessment. The proposed healthcare system aims to integrate patients, doctors, laboratories, pharmacies, and administrative personnel use cases and their primary functions onto a single platform. The proposed framework can also process microscopic images, CT scans, X-rays, and MRI to classify malignancy and give doctors a set of AI precautions for patient risk assessment. The proposed framework incorporates various DCNN models for identifying different forms of tumors and fractures in the human body (i.e., brain, bones, lungs, kidneys, and skin) and generates precautions with the help of a fine-tuned Large Language Model (LLM), Generative Pretrained Transformer 4 (GPT-4). With enough training data, DCNNs can learn highly representative, data-driven, hierarchical image features. The GPT-4 model is selected for generating precautions due to its explanation, reasoning, memory, and accuracy on prior medical assessments and research studies. Classification models are evaluated by a classification report (i.e., Recall, Precision, F1 Score, Support, Accuracy, and Macro and Weighted Average) and confusion matrix, and have shown robust performance compared to the conventional schemes.
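The evaluation metrics listed (Recall, Precision, F1 Score, Support, Accuracy, and Macro and Weighted Averages) all derive from the confusion matrix. A self-contained sketch with toy labels follows; the label values are illustrative, not the paper's data:

```python
import numpy as np

# Toy ground truth and predictions for a 3-class classifier
# (hypothetical labels, standing in for e.g. tumor classes).
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])
y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 2, 0, 0])

n = 3
cm = np.zeros((n, n), dtype=int)      # confusion matrix: rows=true, cols=pred
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)       # per-class: TP / predicted positives
recall = tp / cm.sum(axis=1)          # per-class: TP / actual positives
f1 = 2 * precision * recall / (precision + recall)
support = cm.sum(axis=1)              # actual samples per class

accuracy = tp.sum() / cm.sum()
macro_f1 = f1.mean()                             # unweighted class average
weighted_f1 = (f1 * support).sum() / support.sum()  # support-weighted average
```

The macro average treats all classes equally, while the weighted average reflects class imbalance, which is why reports quote both.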
25 pages, 844 KiB  
Article
Enriching Value of Big Data Cooperative Assets from a Time-Horizon Perspective
by Shaobo Ren, Patrick S. W. Fong and Yi Zhang
Sustainability 2024, 16(24), 10961; https://doi.org/10.3390/su162410961 - 13 Dec 2024
Abstract
Driven by the rise of big data, enterprises urgently need to accurately utilize users’ real-time and accumulated information to realize present value and establish long-term advantages, thereby achieving sustainable development. Previous works identified value co-created through big data as “big data cooperative assets”. However, while the mainstream research on this concept has primarily focused on analyzing its features, formation conditions, and influencing factors, particularly from the perspective of time-horizon value, an equally important area, the formation mechanism, has been neglected. To address this gap, this article constructs a classification framework of big data cooperative assets by combining time-horizon aspects with interaction dominators. It then examines the formation mechanisms of data link and data insight value through multi-case analysis. Overall, this research not only provides new perspectives for enriching the theoretical understanding of big data cooperative assets but also suggests useful practical guidelines for innovative interaction between enterprises and users in the age of data competition. In addition, improving the efficiency of realizing the value of big data cooperative assets helps enterprises better cope with external risks, such as market changes and policy adjustments, and maintain sound operations, further contributing to building a harmonious society and promoting the construction of an ecological civilization.
Figure 1. The big data cooperative asset value classification framework.
Figure 2. Understanding value formation in big data cooperative assets.
21 pages, 2608 KiB  
Article
Voice Analysis in Dogs with Deep Learning: Development of a Fully Automatic Voice Analysis System for Bioacoustics Studies
by Mahmut Karaaslan, Bahaeddin Turkoglu, Ersin Kaya and Tunc Asuroglu
Sensors 2024, 24(24), 7978; https://doi.org/10.3390/s24247978 - 13 Dec 2024
Abstract
Extracting behavioral information from animal sounds has long been a focus of research in bioacoustics, as sound-derived data are crucial for understanding animal behavior and environmental interactions. Traditional methods, which involve manual review of extensive recordings, pose significant challenges. This study proposes an automated system for detecting and classifying animal vocalizations, enhancing efficiency in behavior analysis. The system uses a preprocessing step to segment relevant sound regions from audio recordings, followed by feature extraction using Short-Time Fourier Transform (STFT), Mel-frequency cepstral coefficients (MFCCs), and linear-frequency cepstral coefficients (LFCCs). These features are input into convolutional neural network (CNN) classifiers to evaluate performance. Experimental results demonstrate the effectiveness of different CNN models and feature extraction methods, with AlexNet, DenseNet, EfficientNet, ResNet50, and ResNet152 being evaluated. The system achieves high accuracy in classifying vocal behaviors, such as barking and howling in dogs, providing a robust tool for behavioral analysis. The study highlights the importance of automated systems in bioacoustics research and suggests future improvements using deep learning-based methods for enhanced classification performance.
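The pipeline described, energy-based segmentation followed by spectral feature extraction, can be sketched on synthetic audio. MFCCs or LFCCs would be obtained by applying mel or linear filter banks and a DCT to the STFT magnitudes computed here; all signal parameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
sr = 16000
# Synthetic 1 s recording: low-level noise with a 'bark-like' tonal burst.
audio = rng.normal(0, 0.01, sr)
t = np.arange(2000) / sr
audio[7000:9000] += 0.5 * np.sin(2 * np.pi * 600 * t)

# Step 1: energy-based segmentation of vocal regions (the preprocessing
# step that replaces manual review of the recording).
frame, hop = 400, 200
energies = np.array([np.sum(audio[i:i + frame] ** 2)
                     for i in range(0, len(audio) - frame, hop)])
active = energies > 5 * np.median(energies)  # threshold is an assumption

# Step 2: windowed STFT magnitudes for the active frames only.
window = np.hanning(frame)
spectra = [np.abs(np.fft.rfft(audio[k * hop:k * hop + frame] * window))
           for k in np.flatnonzero(active)]
spectrogram = np.array(spectra)  # rows: active frames, cols: frequency bins
```

Each row of `spectrogram` (or its cepstral transform) becomes one input slice for the CNN classifiers the study compares.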
26 pages, 6618 KiB  
Article
Monitoring Saltmarsh Restoration in the Upper Bay of Fundy Using Multi-Temporal Sentinel-2 Imagery and Random Forests Classifier
by Swarna M. Naojee, Armand LaRocque, Brigitte Leblon, Gregory S. Norris, Myriam A. Barbeau and Matthew Rowland
Remote Sens. 2024, 16(24), 4667; https://doi.org/10.3390/rs16244667 - 13 Dec 2024
Abstract
Saltmarshes provide important ecosystem services, including coastline protection, but face decline due to human activities and climate change. There are increasing efforts to conserve and restore saltmarshes worldwide. Our study evaluated the effectiveness of Sentinel-2 satellite imagery for monitoring landcover changes using a saltmarsh restoration project undergoing its 9th to 12th year of recovery in the megatidal Bay of Fundy in Maritime Canada. Specifically, in 2019–2022, five satellite images per growing season were acquired. Random Forests classification for 13 landcover classes (ranging from bare mud to various plant communities) achieved a high overall classification accuracy, peaking at 96.43% in 2021. Field validation points confirmed this, with high validation accuracies reaching 93.02%. The classification results successfully distinguished ecologically significant classes, such as the Spartina alterniflora–S. patens mix. Our results reveal the appearance of high marsh species in restoration sites and elevation-based zonation patterns, indicating progression. They demonstrate the potential of Sentinel-2 imagery for monitoring saltmarsh restoration projects in north temperate latitudes, aiding management efforts.
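The multi-temporal approach amounts to concatenating each pixel's band values across the five acquisition dates into one feature vector before training the Random Forests classifier. A sketch with synthetic pixel data follows; the dimensions, class count, and toy signal are assumptions standing in for the study's actual Sentinel-2 processing:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_bands, n_dates = 500, 10, 5   # five images per growing season

# Stack the acquisition dates: each pixel becomes one 50-feature vector.
X = rng.normal(size=(n_pixels, n_bands * n_dates))
y = rng.integers(0, 13, n_pixels)          # 13 landcover classes
X[y == 0] += 1.0                           # inject a toy separable signal

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
overall_accuracy = rf.score(X, y)          # training accuracy on toy data
```

In practice the labels come from field plots, accuracy is assessed on held-out validation points rather than the training pixels, and the fitted model is applied to every image pixel to produce the landcover maps.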
Figure 1. Location of the 4 saltmarsh sites in Aulac, New Brunswick, in Sentinel-2 imagery acquired on 3 May 2021. A and D are the reference sites, and B and C the restoration sites.
Figure 2. Flowchart presenting the main image processing steps (input data in purple; image processing in light green; image classifier in pink; results in blue).
Figure 3. Landcover map of reference site A obtained by applying the RF classifier to multi-temporal Sentinel-2 images for 2019, 2020, 2021, and 2022.
Figure 4. Landcover map of reference site D obtained by applying the RF classifier to multi-temporal Sentinel-2 images for 2019, 2020, 2021, and 2022.
Figure 5. Landcover map of restoration site B obtained by applying the RF classifier to multi-temporal Sentinel-2 images for 2019, 2020, 2021, and 2022.
Figure 6. Landcover map of restoration site C obtained by applying the RF classifier to multi-temporal Sentinel-2 images for 2019, 2020, 2021, and 2022.
39 pages, 925 KiB  
Review
Machine Learning Techniques for Sensor-Based Human Activity Recognition with Data Heterogeneity—A Review
by Xiaozhou Ye, Kouichi Sakurai, Nirmal-Kumar C. Nair and Kevin I-Kai Wang
Sensors 2024, 24(24), 7975; https://doi.org/10.3390/s24247975 - 13 Dec 2024
Abstract
Sensor-based Human Activity Recognition (HAR) is crucial in ubiquitous computing, analyzing behaviors through multi-dimensional observations. Despite research progress, HAR confronts challenges, particularly in data distribution assumptions. Most studies assume uniform data distributions across datasets, contrasting with the varied nature of practical sensor data in human activities. Addressing data heterogeneity issues can improve performance, reduce computational costs, and aid in developing personalized, adaptive models with fewer annotated data. This review investigates how machine learning addresses data heterogeneity in HAR by categorizing data heterogeneity types, applying corresponding suitable machine learning methods, summarizing available datasets, and discussing future challenges.
(This article belongs to the Special Issue Non-Intrusive Sensors for Human Activity Detection and Recognition)
15 pages, 17108 KiB  
Article
Investigations on the Performance of a 5 mm CdTe Timepix3 Detector for Compton Imaging Applications
by Juan S. Useche Parra, Gerardo Roque, Michael K. Schütz, Michael Fiederle and Simon Procz
Sensors 2024, 24(24), 7974; https://doi.org/10.3390/s24247974 - 13 Dec 2024
Abstract
Nuclear power plant decommissioning requires the rapid and accurate classification of radioactive waste in narrow spaces and under time constraints. Photon-counting detector technology offers an effective solution for the quick classification and detection of radioactive hotspots in a decommissioning environment. This paper characterizes a 5 mm CdTe Timepix3 detector and evaluates its feasibility as a single-layer Compton camera. The sensor’s electron mobility–lifetime product and resistivity are studied across bias voltages ranging from −100 V to −3000 V, obtaining a value of μₑτₑ = 1.2(1)×10⁻³ cm²/V, and two linear regions with resistivities of ρI = 5.8(2) GΩ and ρII = 4.1(1) GΩ. Additionally, two calibration methodologies are assessed to determine the most suitable for Compton applications, achieving an energy resolution of 16.3 keV for the 137Cs photopeak. The electron drift time in the sensor is estimated to be 122.3 ± 7.4 ns using cosmic muons. Finally, a Compton reconstruction of two simultaneous point-like sources is performed, demonstrating the detector’s capability to accurately locate radiation hotspots with a 5.1° resolution.
(This article belongs to the Special Issue Recent Advances in X-Ray Sensing and Imaging)
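Single-layer Compton imaging recovers each photon's scattering angle from the two energy deposits it leaves in the sensor, via the Compton kinematics relation. A small sketch using the 137Cs line follows; the 200/462 keV split is a hypothetical event for illustration, not measured data from the paper:

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_angle(e_scatter_kev, e_absorb_kev):
    """Scattering angle (degrees) from Compton kinematics:
    cos(theta) = 1 - me*c^2 * (1/E' - 1/E0),
    where E0 = E1 + E2 is the incident energy and E' = E2 is the
    energy of the scattered photon (the absorption deposit)."""
    e0 = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - ME_C2 * (1.0 / e_absorb_kev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))

# A 662 keV photon from 137Cs depositing 200 keV in the scatter
# interaction and 462 keV in the absorption:
theta = compton_angle(200.0, 462.0)
```

Each event constrains the source to a cone of half-angle `theta` around the scatter axis; intersecting many cones yields the reconstructed hotspot image, so the angular resolution quoted in the abstract follows directly from the detector's energy resolution through this relation.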
17 pages, 222 KiB  
Article
Practice and Prospect of Regulating Personal Data Protection in China
by Liping Yang, Yiling Lin and Bing Chen
Laws 2024, 13(6), 78; https://doi.org/10.3390/laws13060078 - 13 Dec 2024
Abstract
Privacy protection is a fundamental guarantee for secure data flows and the basic requirement for data security. A reasonable privacy protection system acts as a catalyst for unlocking the financial value of data. The current legislative framework for personal data protection in China, adhering to the principle of proportionality, establishes critical measures such as informed consent for data collection and processing, data classification and grading management, and remedies for data leakage and other risks. In addition, in judicial practice, typical disputes regarding personal information protection and privacy rights have been promoted to clarify the scope for collecting users’ personal information and biometric data. Although further improvements are needed in legislative, judicial, and technical approaches, China’s commitment and practice in personal data protection are noteworthy. The existing legislation, law enforcement, and technical practices play an increasingly vital role in realizing the financial value of data and are essential for international cooperation on privacy protection. Furthermore, it is crucial to actively explore cooperation mechanisms for cross-border data flows under the principle of data sovereignty, participate in developing international rules for cross-border data flows, and formulate different management norms for cross-border data flows across different industries.
19 pages, 20601 KiB  
Article
The Influence of Climate Change and Socioeconomic Transformations on Land Use and NDVI in Ordos, China
by Yin Cao, Zhigang Ye and Yuhai Bao
Atmosphere 2024, 15(12), 1489; https://doi.org/10.3390/atmos15121489 - 13 Dec 2024
Abstract
Land use change is tied to a series of core issues in global environmental change, such as environmental quality improvement, sustainable resource utilization, energy reuse, and climate change. In this study, Google Earth Engine (GEE), a remote sensing platform for monitoring and analyzing the natural environment, was used to combine Landsat TM/OLI imagery with spectral and topographic features, and the random forest machine learning method was used for supervised classification of low-cloud composite images of Ordos City. The results show that: (1) GEE's computing capacity enables efficient, high-precision analysis of long-term, multi-temporal remote sensing imagery and monitoring of land use change, with a classification accuracy of up to 87%. Compared with other datasets from the same period, the overall and local classification results are more distinct than the ESRI (Environmental Systems Research Institute) and GlobeLand30 data products, and only slightly less accurate than the global 30 m fine land-cover classification products of the Institute of Aerospace Information Innovation of the Chinese Academy of Sciences. (2) The overall accuracy of the Ordos City land cover data from 2003 to 2023 is between 79% and 87%, and the Kappa coefficient is between 0.79 and 0.84. (3) Interacting factors such as climate, terrain, and population, combined with socio-economic data and national and local policies, are the main drivers of land use change between 2003 and 2023.
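The accuracy figures this abstract reports (overall accuracy of 79–87%, Kappa of 0.79–0.84) are standard map-validation metrics computed from reference versus predicted class labels. As a minimal sketch of how they are calculated, using scikit-learn and made-up labels rather than the paper's validation data:

```python
# Overall accuracy and Cohen's kappa from reference vs. predicted labels.
# The label arrays below are illustrative only, not the study's data.
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical validation labels over five land-cover classes
reference = ["grass", "forest", "water", "grass", "built", "grass",
             "unused", "forest", "water", "grass"]
predicted = ["grass", "forest", "water", "grass", "built", "forest",
             "unused", "forest", "grass", "grass"]

oa = accuracy_score(reference, predicted)        # fraction of matches
kappa = cohen_kappa_score(reference, predicted)  # agreement beyond chance

print(f"overall accuracy = {oa:.2f}")
print(f"kappa = {kappa:.2f}")
```

Kappa is lower than overall accuracy because it discounts the agreement expected by chance given each class's frequency, which is why studies like this one report both.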
Figure 1. Ordos city bitmap.
Figure 2. Technique flow chart.
Figure 3. Comparison of the importance of different features.
Figure 4. Area changes for each land use/land cover type: (a) Cultivated Land; (b) Forest Land; (c) Grassland; (d) Water Body; (e) Built-up Land; (f) Unused Land.
Figure 5. Land Use Change Map of Ordos City: (a) 2003; (b) 2008; (c) 2013; (d) 2018; (e) 2023.
Figure 6. Ordos City time series (2003–2023): (a) Rainfall; (b) LST.
Figure 7. Economic Development Indicators of Ordos City from 2003 to 2022: (a) gross regional domestic product; (b) total output of three industries.
Figure 8. Livelihood Development Indicators of Ordos City from 2003 to 2022: (a) GDP per capita; (b) per capita income.
Figure 9. Urbanization Development Indicators from 2003 to 2022: (a) urbanization rate; (b) year-end population.
Figure 10. (a) The trend of NDVI changes in Ordos from 2003 to 2023; (b) the spatial distribution of NDVI change trends in Ordos from 2003 to 2023.
Figure 11. Significant NDVI trend changes in Ordos from 2003 to 2023.
54 pages, 7881 KiB  
Review
Spectroscopy-Based Methods and Supervised Machine Learning Applications for Milk Chemical Analysis in Dairy Ruminants
by Aikaterini-Artemis Agiomavriti, Maria P. Nikolopoulou, Thomas Bartzanas, Nikos Chorianopoulos, Konstantinos Demestichas and Athanasios I. Gelasakis
Chemosensors 2024, 12(12), 263; https://doi.org/10.3390/chemosensors12120263 - 13 Dec 2024
Abstract
Milk analysis is critical to determining milk's intrinsic quality, as well as its nutritional and economic value. Advances in spectroscopy-based techniques combined with machine learning algorithms have made it feasible to develop analytical tools and real-time monitoring and prediction systems for the dairy ruminant sector. The objectives of this review were (i) to describe the most widely applied spectroscopy-based and supervised machine learning methods used to evaluate milk components, origin, technological properties, adulterants, and drug residues; (ii) to present and compare the performance and adaptability of these methods and their most efficient combinations, providing insights into the strengths, weaknesses, opportunities, and challenges of the most promising ones with regard to their capacity for milk quality monitoring both at the point of care and beyond; and (iii) to discuss their applicability and future perspectives for integration into milk data analysis and decision support systems across the milk value chain.
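The pipeline this review surveys pairs a spectral measurement with a supervised classifier or regressor. A minimal sketch of that combination, classifying samples (for example, by species of origin) from their spectra, is shown below; the "spectra" are synthetic Gaussian absorbance bands, not real NIR data, and the wavelength range and band positions are illustrative assumptions:

```python
# Sketch: supervised classification of milk samples from spectral curves.
# Synthetic data stands in for real near-infrared measurements.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
wavelengths = np.linspace(1000, 2500, 200)  # nm, hypothetical NIR range

def fake_spectrum(band_center):
    """Synthetic absorbance curve: one Gaussian band plus sensor noise."""
    return (np.exp(-((wavelengths - band_center) / 120.0) ** 2)
            + rng.normal(0, 0.02, wavelengths.size))

# Two hypothetical classes (e.g., cow vs. goat milk) with shifted bands
X = np.array([fake_spectrum(1450) for _ in range(40)]
             + [fake_spectrum(1550) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
model.fit(X_tr, y_tr)
score = model.score(X_te, y_te)
print(f"held-out accuracy: {score:.2f}")
```

Real applications in the review replace the synthetic curves with FTIR, NIR, or LIBS spectra and often use PLS-based models instead of a linear SVM, but the train/evaluate structure is the same.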
Figure 1. The electromagnetic spectrum and wavelength ranges of electromagnetic radiation.
Figure 2. Example of light's interaction with matter [14] (modified).
Figure 3. Light-scattering effects generated by fat and protein particles in milk. The incident wavelength is smaller than the diameter of the particles, resulting in Mie scattering, which is demonstrated in the zoomed view [17].
Figure 4. Different colors refract at different angles in a dispersive prism due to material dispersion; a wavelength-dependent refractive index divides white light into a spectrum [18].
Figure 5. Classification of optical sensors from the International Union of Pure and Applied Chemistry (IUPAC) [21].
Figure 6. Milk applications and spectroscopy methods [22].
Figure 7. Illustration of the spectroscopy procedure.
Figure 8. (a) Continuous spectrum: contains all wavelengths emitted by a light source; (b) absorption spectrum: black lines where electrons have absorbed light photons; (c) emission spectrum: colored lines where photons have been released as electrons fall to a lower energy level. The colors correspond to specific wavelengths, representing the distinct photon energies released when electrons transition to a lower energy state; they depend on the material and the energy transitions within the atoms or molecules [33].
Figure 9. Near-infrared spectra of milk samples; each color represents a distinct sample [34].
Figure 10. Fourier transform infrared spectra of sheep (blue line), goat (green line), and cow (orange line) milk samples [35].
Figure 11. Laser-induced breakdown spectroscopy spectra from liquid milk samples, illustrating the unique spectral lines of major elements (Mg, Ca, Na, etc.) [36].
Figure 12. Near-infrared spectroscopy analytical methods and their integration into production processes.
Figure 13. Supervised ML processing of data.
Figure 14. Supervised ML methods applied in dairy ruminant and milk analysis research [113,114,115].
Figure 15. Example representation of a neural network model.
Figure 16. Overview of the application of spectral technologies and machine learning for milk analysis.
22 pages, 10004 KiB  
Article
High-Resolution Dynamic Monitoring of Rocky Desertification of Agricultural Land Based on Spatio-Temporal Fusion
by Xin Zhao, Zhongfa Zhou, Guijie Wu, Yangyang Long, Jiancheng Luo, Xingxin Huang, Jing Chen and Tianjun Wu
Land 2024, 13(12), 2173; https://doi.org/10.3390/land13122173 - 13 Dec 2024
Abstract
Current research on rocky desertification primarily prioritizes large-scale surveillance, with minimal attention given to the interiors of agricultural areas. This study offers a comprehensive framework for bedrock extraction in agricultural areas, employing spatial constraints and spatio-temporal fusion methodologies. Utilizing high-resolution Gaofen-2 imagery, we first delineate agricultural land, use these boundaries as spatial constraints to compute the Agricultural Land Bedrock Response Index (ABRI), and apply the spatial and temporal adaptive reflectance fusion model (STARFM) to fuse Gaofen-2 and Sentinel-2 imagery from multiple time periods, resulting in a high-spatio-temporal-resolution bedrock discrimination index (ABRI*) for analysis. This work demonstrates a pronounced rocky desertification phenomenon in the agricultural land of the study area. The ABRI* effectively captures this phenomenon: the classification accuracy for bedrock, based on the ABRI* derived from Gaofen-2 imagery, reaches 0.86. The bedrock exposure area in the farmland showed a decreasing trend from 2019 to 2021, a significant increase from 2021 to 2022, and a gradual decline from 2022 to 2024. Cultivation activities have a significant impact on rocky desertification within agricultural land. The ABRI significantly enhances the capability for dynamic monitoring of rocky desertification in agricultural areas, providing data support for the management of specialized farmland. For vulnerable areas, timely adjustments to planting schemes and the prioritization of intervention measures such as soil conservation, vegetation restoration, and water resource management could help to improve the resilience and stability of agriculture, particularly in karst regions.
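The spatio-temporal fusion step this abstract describes can be reduced to a core intuition: predict a fine-resolution image at time t2 from a fine image at t1 plus the change observed in co-registered coarse imagery. The full STARFM model additionally weights spectrally similar neighboring pixels; the sketch below keeps only the temporal-difference term, on synthetic arrays, and the 3×3 resolution ratio is an illustrative assumption:

```python
# Minimal STARFM-style fusion sketch: fine_t2 ≈ fine_t1 + (coarse_t2 - coarse_t1),
# with the coarse difference upsampled back to the fine grid.
import numpy as np

rng = np.random.default_rng(1)

fine_t1 = rng.uniform(0.1, 0.4, size=(6, 6))  # fine-resolution index at t1
change = 0.05                                  # true index change, t1 -> t2
fine_t2_true = fine_t1 + change                # what we want to recover

def block_mean(img, k=3):
    """Simulate a coarse sensor by averaging k x k blocks of fine pixels."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

coarse_t1 = block_mean(fine_t1)   # coarse observation at t1
coarse_t2 = block_mean(fine_t2_true)  # coarse observation at t2

# Upsample the coarse temporal difference and add it to the fine t1 image
delta = np.kron(coarse_t2 - coarse_t1, np.ones((3, 3)))
fine_t2_pred = fine_t1 + delta

print("max abs error:", np.abs(fine_t2_pred - fine_t2_true).max())
```

Here the change is spatially uniform, so the temporal-difference term alone recovers t2 exactly; the neighborhood weighting in the actual STARFM exists to handle changes that vary within a coarse pixel.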
Figure 1. Mapping of the study area. (Areas (A) and (B) show unmanned aerial vehicle (UAV) images. Sweet potatoes are mainly planted near rocks in Area (A), while corn is mainly planted near rocks in Area (B).)
Figure 2. Cloud cover distribution of Sentinel-2 data in the study area (2019–2024).
Figure 3. Technical workflow diagram. (Subgraphs (A–D) represent the four steps: data collection, data preprocessing, cropland selection, and index construction.)
Figure 4. Cultivated area selection process (where a, b, c, and d represent the specific steps of cultivated area selection described in the text).
Figure 5. Sample selection examples for rocky desertification and non-rocky desertification.
Figure 6. Spectral reflectance curves of vegetation, bare soil, and rock types from S2 and GF2 data.
Figure 7. Analysis of extraction results for cropland areas (MC denotes the mean center of the distribution; DDB denotes the distribution's standard deviation ellipse).
Figure 8. Performance of the Agricultural Land Bedrock Response Index ((A–F) represent different sub-sampling areas).
Figure 9. Spatio-temporal fusion accuracy validation results. (Subplot 1 shows the extraction result of ABRI*_S2 and the distribution of 500 validation points. Subplot 2 presents the quadratic-function fitting correlation analysis between ABRI*_S2 and ABRI_GF2. Subplot 3 displays the histogram distribution of the 500 sample-point results for ABRI*_S2 and ABRI_GF2. Subplot 4 presents further analysis of the 500 sample points. Subplot 5 shows the accuracy calculations for the 500 sample points. ABRI denotes the Agricultural Land Bedrock Response Index, normalized to the 0–1 range. ABRI*_S2 and ABRI_GF2 denote the S2-based ABRI fitted through the STARFM model and the ABRI derived from GF2, respectively. r_pearson is the Pearson correlation coefficient R; RMSE is the root mean square error; MAE is the mean absolute error; Bias is the mean bias; d is the concordance index.)
Figure 10. Comparison of mean and variance in ABRI* calculation results for multiple periods in the study area.
Figure 11. ABRI*_S2 changes in rocky desertification areas (regions F).
Figure 12. Distribution of rocky desertification in the study area (where _P represents the peak growing period and _D represents the non-peak growing period).
Figure 13. Comparative analysis of accuracy between traditional rock exposure indices and the ABRI.
Figure 14. Distribution of rocky desertification change trends in the study area.
Figure 15. Comparison of rock desertification degree results with actual ground bedrock exposure (where A and B represent two different regions; T1 and T2 represent 12 July 2021 and 18 January 2021, respectively; IMAGE_UAV represents UAV imagery; RD_KBRI, RD_NDRI, and RD_SRI2 represent rock desertification degrees derived from different rock indices; ABRI_S2 and ABRI_S2* represent the 10 m resolution ABRI calculated from S2 and the 1 m resolution ABRI derived from spatio-temporal fusion, respectively; and ROCK_ABRI* and ROCK_UAV represent the rock distributions obtained from ABRI* and UAV imagery, respectively).
16 pages, 361 KiB  
Article
Stroke Dataset Modeling: Comparative Study of Machine Learning Classification Methods
by Kalina Kitova, Ivan Ivanov and Vincent Hooper
Algorithms 2024, 17(12), 571; https://doi.org/10.3390/a17120571 - 13 Dec 2024
Abstract
Stroke prediction is a vital research area due to its significant implications for public health. This comparative study offers a detailed evaluation of algorithmic methodologies and outcomes from three recent prominent studies on stroke prediction. Ivanov et al. tackled issues of imbalanced datasets and algorithmic bias using deep learning techniques, achieving notable results with a 98% accuracy and a 97% recall rate. They utilized resampling methods to balance the classes and advanced imputation techniques to handle missing data, underscoring the critical role of data preprocessing in enhancing the performance of Support Vector Machines (SVMs). Hassan et al. addressed missing data and class imbalance using multiple imputations and the Synthetic Minority Oversampling Technique (SMOTE). They developed a Dense Stacking Ensemble (DSE) model with over 96% accuracy. Their results underscore the efficiency of ensemble learning techniques and imputation for handling imbalanced datasets in stroke prediction. Bathla et al. employed various classifiers and feature selection techniques, including SMOTE, for class balancing. Their Random Forest (RF) classifier, combined with Feature Importance (FI) selection, achieved an accuracy of 97.17%, illustrating the positive impact of RF and relevant feature selection on model performance. A comparative analysis indicated that Ivanov et al.'s method achieved the highest accuracy rate. However, the studies collectively highlight that the choice of models and techniques for stroke prediction should be tailored to the specific characteristics of the dataset used. This study emphasizes the importance of effective data management and model selection in enhancing predictive performance.
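The workflow the compared studies share rebalances the training set before fitting a classifier, then checks recall on the rare positive ("stroke") class. A minimal sketch follows, on synthetic data; plain random oversampling stands in for SMOTE here to keep the example dependency-light, and all dataset parameters are illustrative:

```python
# Sketch: handle class imbalance by oversampling the minority class in the
# training split, then fit a Random Forest and report minority-class recall.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic, highly imbalanced dataset (~95% negative class)
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95],
                           class_sep=3.0, flip_y=0.0, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Random oversampling: repeat minority rows until the classes are balanced.
# (SMOTE instead interpolates new synthetic minority points.)
rng = np.random.default_rng(42)
minority = np.flatnonzero(y_tr == 1)
extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_bal, y_bal)
recall = recall_score(y_te, clf.predict(X_te))
print(f"minority-class recall: {recall:.2f}")
```

Recall is the metric to watch in this setting: a classifier that always predicts "no stroke" scores ~95% accuracy on such data while missing every positive case, which is why all three studies pair accuracy with recall.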
(This article belongs to the Special Issue Algorithms in Data Classification (2nd Edition))
Figure 1. Sparsity matrix for the dataset.