
Search Results (1,407)

Search Parameters:
Keywords = privacy preservation

48 pages, 1598 KiB  
Article
Trustworthy AI for Whom? GenAI Detection Techniques of Trust Through Decentralized Web3 Ecosystems
by Igor Calzada, Géza Németh and Mohammed Salah Al-Radhi
Big Data Cogn. Comput. 2025, 9(3), 62; https://doi.org/10.3390/bdcc9030062 - 6 Mar 2025
Viewed by 136
Abstract
As generative AI (GenAI) technologies proliferate, ensuring trust and transparency in digital ecosystems becomes increasingly critical, particularly within democratic frameworks. This article examines decentralized Web3 mechanisms—blockchain, decentralized autonomous organizations (DAOs), and data cooperatives—as foundational tools for enhancing trust in GenAI. These mechanisms are analyzed within the framework of the EU’s AI Act and the Draghi Report, focusing on their potential to support content authenticity, community-driven verification, and data sovereignty. Based on a systematic policy analysis, this article proposes a multi-layered framework to mitigate the risks of AI-generated misinformation. Specifically, as a result of this analysis, it identifies and evaluates seven detection techniques of trust stemming from the action research conducted in the Horizon Europe Lighthouse project called ENFIELD: (i) federated learning for decentralized AI detection, (ii) blockchain-based provenance tracking, (iii) zero-knowledge proofs for content authentication, (iv) DAOs for crowdsourced verification, (v) AI-powered digital watermarking, (vi) explainable AI (XAI) for content detection, and (vii) privacy-preserving machine learning (PPML). By leveraging these approaches, the framework strengthens AI governance through peer-to-peer (P2P) structures while addressing the socio-political challenges of AI-driven misinformation. Ultimately, this research contributes to the development of resilient democratic systems in an era of increasing technopolitical polarization. Full article
20 pages, 1435 KiB  
Article
Hardware Acceleration-Based Privacy-Aware Authentication Scheme for Internet of Vehicles Using Physical Unclonable Function
by Ujunwa Madububa Mbachu, Rabeea Fatima, Ahmed Sherif, Elbert Dockery, Mohamed Mahmoud, Maazen Alsabaan and Kasem Khalil
Sensors 2025, 25(5), 1629; https://doi.org/10.3390/s25051629 - 6 Mar 2025
Viewed by 147
Abstract
Due to technological advancement, the advent of smart cities has facilitated the deployment of advanced urban management systems. This integration has been made possible through the Internet of Vehicles (IoV), a foundational technology. By connecting smart cities with vehicles, the IoV enhances the safety and efficiency of transportation. This interconnected system facilitates wireless communication among vehicles, enabling the exchange of crucial traffic information. However, this significant technological advancement also raises privacy concerns for individual users. This paper presents an innovative privacy-preserving authentication scheme for the IoV using physical unclonable functions (PUFs). The scheme employs the k-nearest neighbor (KNN) encryption technique, which possesses a multi-multi searching property. Its main objective is to authenticate autonomous vehicles (AVs) within the IoV framework. An innovative PUF design is applied to generate random keys for our authentication scheme to enhance security. This two-layer security approach protects against various cyber-attacks, including fraudulent identities, man-in-the-middle attacks, and unauthorized access to individual user information. Because a substantial amount of information must be processed for authentication, our scheme is implemented using hardware acceleration on a Nexys A7-100T FPGA board. Our analysis of privacy and security illustrates that the specified design goals are effectively accomplished. Furthermore, the performance analysis reveals that our approach imposes a minimal communication and computational burden and optimally utilizes hardware resources. The results show that the proposed authentication scheme exhibits a non-linear increase in encryption time with growing AV ID size, starting at 5 μs for 100 bits and rising to 39 μs for 800 bits, and a more gradual, linear increase in search time, starting at under 1 μs for 100 bits and rising to under 8 μs for 800 bits. Additionally, for hardware utilization, our scheme uses only 25% of the DSP slices and I/O pins, 22.2% of the BRAM, 5.6% of the flip-flops, and 24.3% of the LUTs. Full article
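The PUF-as-key-source idea behind this scheme can be sketched with a toy arbiter-style PUF model: a device-unique delay vector fixed at manufacture, whose sign response to a challenge yields reproducible key bits. This is an illustrative simulation under assumed names and parameters, not the paper's actual PUF circuit or its KNN encryption.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

class ToyArbiterPUF:
    """Toy arbiter-style PUF: response = sign of accumulated delay difference."""

    def __init__(self, n_stages=64):
        # Device-unique manufacturing variation, fixed per chip.
        self.delays = rng.normal(size=n_stages)

    def response(self, challenge):
        # challenge: array of 0/1 bits; map to +/-1 path selectors.
        phi = 1 - 2 * np.asarray(challenge)
        return int(self.delays @ phi > 0)

    def derive_key(self, challenges):
        # Concatenate response bits from many challenges into a key.
        return [self.response(c) for c in challenges]

puf = ToyArbiterPUF()
challenges = rng.integers(0, 2, size=(128, 64))
key = puf.derive_key(challenges)
# The same device reproduces the same key bits for the same challenges.
assert key == puf.derive_key(challenges)
```

A different device (different `delays`) would produce a different key from the same challenges, which is what makes the response usable as a device-bound secret.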
Figure 1. Network model.
Figure 2. Proposed PUF architecture.
Figure 3. The proposed method architecture.
Figure 4. Communication overhead comparison with Refs. [35,37,38,39].
Figure 5. Encryption time.
Figure 6. Search time.
Figure 7. Encryption time vs. number of AVs.
Figure 8. Search time vs. number of AVs.
Figure 9. Computational overhead comparison with Refs. [35,37,38,39].
Figure 10. Reliability comparison.
Figure 11. Randomness comparison.
Figure 12. Uniqueness comparison.
60 pages, 1482 KiB  
Systematic Review
Federated Learning for Cloud and Edge Security: A Systematic Review of Challenges and AI Opportunities
by Latifa Albshaier, Seetah Almarri and Abdullah Albuali
Electronics 2025, 14(5), 1019; https://doi.org/10.3390/electronics14051019 - 3 Mar 2025
Viewed by 200
Abstract
The ongoing evolution of cloud computing requires sustained attention to security, privacy, and compliance issues. The purpose of this paper is to systematically review the current literature on the application of federated learning (FL) and artificial intelligence (AI) to improve cloud computing security while preserving privacy, delivering real-time threat detection, and meeting regulatory requirements. The research follows a systematic literature review (SLR) approach, examining 30 studies published between 2020 and 2024 in accordance with the PRISMA 2020 checklist. The analysis shows that FL provides a significant privacy risk reduction of 25%, especially in healthcare and similar domains, and improves threat detection by 40% in critical infrastructure areas. A total of 80% of the reviewed implementations showed improved privacy, but challenges like communication overhead and resource limitations persist, with 50% of studies reporting latency issues. To overcome these obstacles, this study also explores emerging solutions, including model compression, hybrid federated architectures, and cryptographic enhancements. Additionally, this paper demonstrates the unexploited capability of FL for real-time decision-making in dynamic edge environments and highlights its potential across autonomous systems, the Industrial Internet of Things (IIoT), and cybersecurity frameworks. The proposed insights present a deployment strategy for FL models that enables scalable, secure, and privacy-preserving operations, supporting robust cloud security solutions in the AI era. Full article
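The aggregation step at the core of the FL systems this review surveys is federated averaging; a minimal sketch is below. Real deployments layer secure aggregation, compression, and differential privacy on top of this; the function and data here are purely illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients with different amounts of local data; the third client's
# update counts twice as much because it holds half of the total data.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_model = fedavg(updates, sizes)
```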
Figure 1. PRISMA flow diagram for literature selection.
Figure 2. FL taxonomy.
Figure 3. FL architecture.
Figure 4. Cloud breaches from 2020–2023.
Figure 5. Top cloud breaches.
Figure 6. FL in cloud and edge architecture.
20 pages, 833 KiB  
Article
Mobility Prediction and Resource-Aware Client Selection for Federated Learning in IoT
by Rana Albelaihi
Future Internet 2025, 17(3), 109; https://doi.org/10.3390/fi17030109 - 1 Mar 2025
Viewed by 141
Abstract
This paper presents the Mobility-Aware Client Selection (MACS) strategy, developed to address the challenges associated with client mobility in Federated Learning (FL). FL enables decentralized machine learning by allowing collaborative model training without sharing raw data, preserving privacy. However, client mobility and limited resources in IoT environments pose significant challenges to the efficiency and reliability of FL. MACS is designed to maximize client participation while ensuring timely updates under computational and communication constraints. The proposed approach incorporates a Mobility Prediction Model to forecast client connectivity and resource availability and a Resource-Aware Client Evaluation mechanism to assess eligibility based on predicted latencies. MACS optimizes client selection, improves convergence rates, and enhances overall system performance by employing these predictive capabilities and a dynamic resource allocation strategy. The evaluation includes comparisons with advanced baselines such as Reinforcement Learning-based FL (RL-based) and Deep Learning-based FL (DL-based), in addition to Static and Random selection methods. For the CIFAR dataset, MACS achieved a final accuracy of 95%, outperforming Static selection (85%), Random selection (80%), RL-based FL (90%), and DL-based FL (93%). Similarly, for the MNIST dataset, MACS reached 98% accuracy, surpassing Static selection (92%), Random selection (88%), RL-based FL (94%), and DL-based FL (96%). Additionally, MACS consistently required fewer iterations to achieve target accuracy levels, demonstrating its efficiency in dynamic IoT environments. This strategy provides a scalable and adaptable solution for sustainable federated learning across diverse IoT applications, including smart cities, healthcare, and industrial automation. Full article
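The two MACS ingredients described above, a mobility/connectivity forecast and a resource-aware eligibility check against a round deadline, can be sketched as follows. The field names, scoring, and thresholds are illustrative assumptions, not the paper's exact formulation.

```python
def predicted_latency(compute_s, comm_s):
    """Predicted round latency = local computation time + upload time."""
    return compute_s + comm_s

def select_clients(clients, deadline, stay_prob_min=0.5):
    """Keep clients predicted to stay connected and to meet the deadline,
    preferring those most likely to remain reachable."""
    eligible = [
        c for c in clients
        if c["stay_prob"] >= stay_prob_min
        and predicted_latency(c["compute_s"], c["comm_s"]) <= deadline
    ]
    return sorted(eligible, key=lambda c: -c["stay_prob"])

clients = [
    {"id": 1, "stay_prob": 0.9, "compute_s": 2.0, "comm_s": 1.0},
    {"id": 2, "stay_prob": 0.3, "compute_s": 1.0, "comm_s": 0.5},  # too mobile
    {"id": 3, "stay_prob": 0.8, "compute_s": 4.0, "comm_s": 2.5},  # too slow
    {"id": 4, "stay_prob": 0.7, "compute_s": 1.5, "comm_s": 1.0},
]
selected = select_clients(clients, deadline=5.0)
```

With these numbers, clients 1 and 4 are selected: client 2 fails the connectivity forecast and client 3 fails the deadline check.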
Figure 1. Federated Learning with mobile clients.
Figure 2. Client selection across FL methods.
Figure 3. Average data rate across FL methods.
Figure 4. Computational capacity across FL methods.
Figure 5. Delay comparison across FL methods.
Figure 6. Coverage indicator across FL methods.
Figure 7. Accuracy comparison for CIFAR dataset.
Figure 8. Accuracy comparison for MNIST dataset.
48 pages, 1061 KiB  
Review
Navigating Challenges and Harnessing Opportunities: Deep Learning Applications in Internet of Medical Things
by John Mulo, Hengshuo Liang, Mian Qian, Milon Biswas, Bharat Rawal, Yifan Guo and Wei Yu
Future Internet 2025, 17(3), 107; https://doi.org/10.3390/fi17030107 - 1 Mar 2025
Viewed by 400
Abstract
Integrating deep learning (DL) with the Internet of Medical Things (IoMT) is a paradigm shift in modern healthcare, offering enormous opportunities for patient care, diagnostics, and treatment. Implementing DL with IoMT has the potential to deliver better diagnosis, treatment, and patient management. However, the practical implementation has challenges, including data quality, privacy, interoperability, and limited computational resources. This survey article provides a conceptual IoMT framework for healthcare, synthesizes and identifies the state-of-the-art solutions that tackle the challenges of the current applications of DL, and analyzes existing limitations and potential future developments. Through an analysis of case studies and real-world implementations, this work provides insights into best practices and lessons learned, including the importance of robust data preprocessing, integration with legacy systems, and human-centric design. Finally, we outline future research directions, emphasizing the development of transparent, scalable, and privacy-preserving DL models to realize the full potential of IoMT in healthcare. This survey aims to serve as a foundational reference for researchers and practitioners seeking to navigate the challenges and harness the opportunities in this rapidly evolving field. Full article
(This article belongs to the Special Issue The Future Internet of Medical Things, 3rd Edition)
Figure 1. Layer architecture of smart healthcare.
Figure 2. Problem space for DL-empowered IoMT.
Figure 3. Workflow of DL models in IoMT for healthcare.
18 pages, 3530 KiB  
Article
PPRD-FL: Privacy-Preserving Federated Learning Based on Randomized Parameter Selection and Dynamic Local Differential Privacy
by Jianlong Feng, Rongxin Guo and Jianqing Zhu
Electronics 2025, 14(5), 990; https://doi.org/10.3390/electronics14050990 - 28 Feb 2025
Viewed by 290
Abstract
As traditional federated learning algorithms often fall short in providing privacy protection, a growing body of research integrates local differential privacy methods into federated learning to strengthen privacy guarantees. However, under a fixed privacy budget, as the dimensionality of the model parameters increases, the privacy budget allocated per parameter diminishes, meaning a larger amount of noise is required to meet privacy requirements. This escalation in noise may adversely affect the final model's performance. To address this, we propose a privacy-preserving federated learning approach (PPRD-FL). First, we design a randomized parameter selection strategy that combines randomization with importance-based filtering, effectively addressing the privacy-budget dilution problem by selecting only the most crucial parameters for global aggregation. Second, we develop a dynamic local differential privacy-based perturbation mechanism, which adjusts the noise level according to the training phase, not only providing robustness and security but also optimizing the dynamic allocation of the privacy budget. Finally, our experiments demonstrate that the proposed approach maintains high performance while ensuring strong privacy guarantees. Full article
(This article belongs to the Special Issue Security and Privacy in Emerging Technologies)
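The two mechanisms the abstract describes can be sketched as (i) keeping only the top-k most important parameters (by gradient magnitude) from a randomly drawn candidate subset, and (ii) Laplace noise whose per-round budget grows as training proceeds, so later fine-tuning rounds get less noise. The candidate fraction, the linear budget schedule, and the function names are assumptions for illustration, not the paper's exact R-PSS/DLDP formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_parameters(grad, candidate_frac=0.5, k=3):
    """Randomly draw a candidate subset, then keep the k largest-magnitude
    gradients among the candidates (randomization + importance filtering)."""
    candidates = rng.choice(grad.size, size=int(grad.size * candidate_frac),
                            replace=False)
    order = np.argsort(-np.abs(grad[candidates]))
    return np.sort(candidates[order[:k]])

def dynamic_laplace_noise(values, epsilon_total, round_idx, total_rounds,
                          sensitivity=1.0):
    """Laplace perturbation with a linearly growing per-round budget; the
    per-round budgets sum to epsilon_total over all rounds."""
    eps_round = epsilon_total * (round_idx + 1) / (
        total_rounds * (total_rounds + 1) / 2)
    return values + rng.laplace(scale=sensitivity / eps_round,
                                size=values.shape)

grad = np.array([0.1, -2.0, 0.05, 1.5, -0.3, 0.8, -1.2, 0.02])
idx = select_parameters(grad)
noisy = dynamic_laplace_noise(grad[idx], epsilon_total=8.0,
                              round_idx=9, total_rounds=10)
```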
Figure 1. Privacy risks in federated learning.
Figure 2. Architecture of the proposed privacy-preserving federated learning approach based on randomized parameter selection and dynamic local differential privacy (PPRD-FL).
Figure 3. Impact of R-PSS and DLDP on model accuracy and loss over global rounds: (a) accuracy and (b) loss.
Figure 4. Model accuracy and loss under different privacy levels ε on the MNIST dataset: (a) accuracy and (b) loss.
Figure 5. Model accuracy and loss under different privacy levels ε on the Fashion-MNIST dataset: (a) accuracy and (b) loss.
Figure 6. Model accuracy and loss under different privacy levels ε on the CIFAR-10 dataset: (a) accuracy and (b) loss.
Figure 7. Model accuracy under different schemes.
22 pages, 433 KiB  
Article
Communication Efficient Secure Three-Party Computation Using Lookup Tables for RNN Inference
by Yulin Wu, Chuyi Liao, Xiaozhen Sun, Yuyun Shen and Tong Wu
Electronics 2025, 14(5), 985; https://doi.org/10.3390/electronics14050985 - 28 Feb 2025
Viewed by 157
Abstract
Many leading technology companies now offer Machine Learning as a Service platforms, enabling developers and organizations to access the inference capabilities of pre-trained models via API calls. However, due to concerns over user data privacy, inter-enterprise competition, and legal and regulatory constraints, directly using pre-trained models in the cloud for inference faces security challenges. In this paper, we propose communication-efficient secure three-party protocols for recurrent neural network (RNN) inference. First, we design novel three-party secret-sharing protocols for digit decomposition and B2A (Boolean-to-arithmetic) conversion, enabling efficient transformation of secret shares between Boolean and arithmetic rings. Then, we propose a lookup table-based secure three-party protocol. Unlike the intuitive approach of directly looking up tables to obtain results, we compute the results by exploiting the inherent mathematical properties of binary lookup tables, so the communication complexity of the lookup table protocol depends only on the output bit width. We also design secure three-party protocols for key functions in the RNN, including matrix multiplication and the sigmoid and tanh functions. Our protocol divides the computation into online and offline phases and performs most of the computation locally. Theoretical analysis shows that the number of communication rounds is reduced from four to one. The experimental results show that, compared with the state-of-the-art SIRNN, the online communication overhead of the sigmoid and tanh functions decreased by 80.39% and 79.94%, respectively. Full article
(This article belongs to the Special Issue Security and Privacy in Distributed Machine Learning)
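The share algebra underlying such protocols can be checked in plaintext: a secret is split into three additive shares over a ring, and a bit held as XOR shares is converted to its arithmetic value via the inclusion–exclusion identity b0 ⊕ b1 ⊕ b2 = b0 + b1 + b2 − 2(b0b1 + b1b2 + b0b2) + 4·b0b1b2. The real three-party protocol evaluates this interactively over secret shares; the code below only verifies the arithmetic, as an assumption-free sanity sketch rather than the paper's protocol.

```python
import random

RING = 2 ** 64
random.seed(1)

def share(x):
    """Split x into three additive shares modulo 2**64."""
    s0, s1 = random.randrange(RING), random.randrange(RING)
    return [s0, s1, (x - s0 - s1) % RING]

def reconstruct(shares):
    """Recombine additive shares."""
    return sum(shares) % RING

def b2a_value(b0, b1, b2):
    """Arithmetic value of the XOR of three Boolean bit-shares."""
    return (b0 + b1 + b2
            - 2 * (b0 * b1 + b1 * b2 + b0 * b2)
            + 4 * b0 * b1 * b2)

secret = 123456789
assert reconstruct(share(secret)) == secret
# The B2A identity holds for every bit-share pattern.
assert all(b2a_value(b0, b1, b2) == (b0 ^ b1 ^ b2)
           for b0 in (0, 1) for b1 in (0, 1) for b2 in (0, 1))
```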
Figure 1. Example of a function with δ = 3 inputs and σ = 1 outputs represented as a Boolean circuit and a lookup table.
Figure 2. System model of secure 3-party RNN inference.
15 pages, 1427 KiB  
Article
Privacy-Preserving Data Sharing and Computing for Outsourced Policy Iteration with Attempt Records from Multiple Users
by Bangyan Chen and Jun Ye
Appl. Sci. 2025, 15(5), 2624; https://doi.org/10.3390/app15052624 - 28 Feb 2025
Viewed by 196
Abstract
Reinforcement learning is a machine learning framework that relies on extensive trial and error to learn the best policy, maximizing the cumulative reward through interaction between the agent and the environment. In practice, the computing resources of a single user are limited, so the cooperation of multiple users is needed; however, joint learning across multiple users introduces the risk of privacy leakage. This research proposes a method to safely share the effort of multiple users in an encrypted state and to outsource the reinforcement learning computation, reducing users' local calculations, by combining the homomorphic properties of cryptographic algorithms with a multi-key ciphertext fusion mechanism. The proposed scheme is provably secure, and the experimental results show that it has an acceptable impact on performance while ensuring privacy protection. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
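The homomorphic property such a scheme relies on can be demonstrated with a minimal textbook Paillier sketch: multiplying two ciphertexts modulo n² decrypts to the sum of the plaintexts, so a server can aggregate users' encrypted records without seeing them. Toy key sizes only (real keys are ~1024-bit primes), and this single-key sketch does not reproduce the paper's multi-key ciphertext fusion.

```python
import math
import random

random.seed(0)
p, q = 1789, 1861                      # toy primes for illustration only
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)
g = n + 1                              # standard generator choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # precomputed decryption constant

def encrypt(m):
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:        # r must be a unit mod n
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 1234, 5678
c_sum = (encrypt(a) * encrypt(b)) % n2  # homomorphic addition of plaintexts
assert decrypt(c_sum) == a + b
```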
Figure 1. System model.
Figure 2. (a) A solution by policy after 6000 iterations. (b) A solution by random policy.
Figure 3. Time costs under different numbers of iterations.
18 pages, 2639 KiB  
Article
Privacy-Preserved Visual Simultaneous Localization and Mapping Based on a Dual-Component Approach
by Mingxu Yang, Chuhua Huang, Xin Huang and Shengjin Hou
Appl. Sci. 2025, 15(5), 2583; https://doi.org/10.3390/app15052583 - 27 Feb 2025
Viewed by 156
Abstract
Edge-assisted visual simultaneous localization and mapping (SLAM) is widely used in autonomous driving, robot navigation, and augmented reality for environmental perception, map construction, and real-time positioning. However, it poses significant privacy risks, as input images may contain sensitive information, and generated 3D point clouds can reconstruct original scenes. To address these concerns, this paper proposes a dual-component privacy-preserving approach for visual SLAM. First, a privacy protection method for images is proposed, which combines object detection and image inpainting to protect privacy-sensitive information in images. Second, an encryption algorithm is introduced to convert 3D point cloud data into a 3D line cloud through dimensionality enhancement. Integrated with ORB-SLAM3, the proposed method is evaluated on the Oxford Robotcar and KITTI datasets. Results demonstrate that it effectively safeguards privacy-sensitive information while ORB-SLAM3 maintains accurate pose estimation in dynamic outdoor scenes. Furthermore, the encrypted line cloud prevents unauthorized attacks on recovering the original point cloud. This approach enhances privacy protection in visual SLAM and is expected to expand its potential applications. Full article
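The geometric core of the dimensionality-enhancement step, in the spirit of line-cloud privacy work, is to replace each 3D map point with a line through it in a random direction, storing a shifted origin so the original point is no longer recorded. The representation and parameters below are assumptions for illustration; the paper's encryption algorithm is more involved.

```python
import numpy as np

rng = np.random.default_rng(7)

def lift_to_line_cloud(points):
    """Return (origins, directions): one line per point, with the stored
    origin shifted along the random direction away from the point."""
    directions = rng.normal(size=points.shape)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    t = rng.uniform(-10.0, 10.0, size=(len(points), 1))
    origins = points + t * directions   # some other point on the same line
    return origins, directions

points = rng.uniform(-5, 5, size=(100, 3))
origins, dirs = lift_to_line_cloud(points)

# Each original point still lies on its line (closest line point == point)...
t_back = np.einsum('ij,ij->i', points - origins, dirs)
closest = origins + t_back[:, None] * dirs
assert np.allclose(closest, points)
# ...but the stored origins are not the original points.
assert not np.allclose(origins, points)
```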
Figure 1. Structure of our dual-component approach.
Figure 2. Architecture of YOLOv8 [25].
Figure 3. Architecture of EdgeConnect [26].
Figure 4. Comparison of detection results on the test set of KITTI Object Detection Evaluation 2012.
Figure 5. Visualization of the original image and images generated by different image privacy-preserving algorithms.
Figure 6. Visualization of the original point cloud, the lifted line cloud, and the restored point cloud of sequence 58.
Figure 7. Visualization of two consecutive images from sequence 58.
Figure 8. Comparison of pose trajectories between the original ORB-SLAM3 and privacy-preserved ORB-SLAM3 on the Oxford Robotcar and KITTI datasets. (a,c) Trajectories of sequences 12 and 001. (b,d) Absolute pose errors of sequences 12 and 001, respectively.
Figure 9. Comparison of pose trajectories between the ground truth and privacy-preserved ORB-SLAM3 on some sequences of the KITTI SLAM Evaluation 2012 dataset. (a,c) Trajectories of sequences SE-00 and SE-05. (b,d) Absolute pose errors of sequences SE-00 and SE-05, respectively.
20 pages, 270 KiB  
Article
A Novel User Behavior Modeling Scheme for Edge Devices with Dynamic Privacy Budget Allocation
by Hua Zhang, Hao Huang and Cheng Peng
Electronics 2025, 14(5), 954; https://doi.org/10.3390/electronics14050954 - 27 Feb 2025
Viewed by 133
Abstract
Federated learning (FL) enables privacy-preserving collaborative model training across edge devices without exposing raw user data, but it is vulnerable to privacy leakage through shared model updates, making differential privacy (DP) essential. Existing DP-based FL methods, such as fixed-noise DP, suffer from excessive noise injection and inefficient privacy budget allocation, which degrade model accuracy. To address these limitations, we propose an adaptive differential privacy mechanism that dynamically adjusts the noise based on gradient sensitivity, optimizing the privacy–accuracy trade-off, along with a hierarchical privacy budget management strategy to minimize cumulative privacy loss. We also incorporate communication-efficient techniques like gradient sparsification and quantization to reduce bandwidth usage without sacrificing privacy guarantees. Experimental results on three real-world datasets showed that our adaptive DP-FL method improved accuracy by up to 8.1%, reduced privacy loss by 38%, and lowered communication overhead by 15–18%. While promising, our method’s robustness against advanced privacy attacks and its scalability in real-world edge environments are areas for future exploration, highlighting the need for further validation in practical FL applications such as personalized recommendation and privacy-sensitive user behavior modeling. Full article
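The three ingredients mentioned above can be sketched as per-update clipping (which bounds the DP sensitivity), Gaussian noise scaled to the clip norm, and top-k sparsification before upload. The noise multiplier, k, and the composition of the steps are placeholder assumptions, not the paper's exact adaptive mechanism.

```python
import numpy as np

rng = np.random.default_rng(3)

def privatize_update(grad, clip_norm=1.0, sigma_mult=1.2, k=4):
    # 1. Clip: bound each client's contribution (the DP sensitivity).
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / norm)
    # 2. Noise: Gaussian mechanism scaled to the clipping norm.
    noisy = clipped + rng.normal(scale=sigma_mult * clip_norm,
                                 size=grad.shape)
    # 3. Sparsify: upload only the k largest-magnitude coordinates.
    keep = np.argsort(-np.abs(noisy))[:k]
    sparse = np.zeros_like(noisy)
    sparse[keep] = noisy[keep]
    return sparse

update = privatize_update(rng.normal(size=16))
```

Noise is added before sparsification here so the selection itself does not leak unperturbed coordinates; the ordering of these steps is one of the design choices such schemes must justify.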
Figure 1. Model accuracy across different privacy budgets.
Figure 2. Comparison of FL privacy-preserving methods.
31 pages, 1586 KiB  
Article
Privacy-Preserving and Verifiable Personalized Federated Learning
by Dailin Xie and Dan Li
Symmetry 2025, 17(3), 361; https://doi.org/10.3390/sym17030361 - 27 Feb 2025
Viewed by 94
Abstract
As an important branch of machine learning, federated learning still suffers from statistical heterogeneity. Personalized federated learning (PFL) has therefore been proposed to deal with this obstacle. However, the privacy of local and global gradients is still under threat in PFL, and the correctness of the aggregated result cannot be verified. We therefore propose a secure and verifiable personalized federated learning protocol that protects privacy using homomorphic encryption and verifies the aggregated result using Lagrange interpolation and commitments. Furthermore, it resists collusion attacks by servers and clients attempting to pass verification. Comprehensive theoretical analysis is provided to verify our protocol's security. Extensive experiments on MNIST, Fashion-MNIST and CIFAR-10 demonstrate the effectiveness of our protocol. Our model achieved accuracies of 88.25% on CIFAR-10, 99.01% on MNIST and 96.29% on Fashion-MNIST. The results show that our protocol improves security while maintaining the classification accuracy of the training model. Full article
Figure 1. System model of PPVP.
Figure 2. Comparison of accuracy on CIFAR-10.
Figure 3. Comparison of accuracy on (a) MNIST and (b) Fashion-MNIST.
Figure 4. Comparison of computation overhead regarding Paillier encryption: (a) encryption time based on different dimensions of gradients; (b) encryption time based on different key sizes; (c) decryption time based on different dimensions of gradients; (d) decryption time based on different key sizes.
Figure 5. Comparison of computation overhead regarding verification: (a) verification time based on different dimensions of gradients in the encryption stage; (b) verification time based on different parameters in the encryption stage; (c) verification time based on different dimensions of gradients in the decryption stage; (d) verification time based on different parameters in the decryption stage.
Figure 6. Communication overhead between clients and server: (a) based on different dimensions of gradients; (b) based on the number of clients; (c) based on different rounds; (d) based on different numbers of groups.
19 pages, 291 KiB  
Article
Towards Federated Robust Approximation of Nonlinear Systems with Differential Privacy Guarantee
by Zhijie Yang, Xiaolong Yan, Guoguang Chen, Mingli Niu and Xiaoli Tian
Electronics 2025, 14(5), 937; https://doi.org/10.3390/electronics14050937 - 26 Feb 2025
Viewed by 288
Abstract
Nonlinear systems, characterized by their complex and often unpredictable dynamics, are essential in various scientific and engineering applications. However, accurately modeling these systems remains challenging due to their nonlinearity, high-dimensional interactions, and the privacy concerns inherent in data-sensitive domains. Existing federated learning approaches struggle to model such complex behaviors, particularly due to their inability to capture high-dimensional interactions and their failure to maintain privacy while ensuring robust model performance. This paper presents a novel federated learning framework for the robust approximation of nonlinear systems, addressing these challenges by integrating differential privacy to protect sensitive data without compromising model utility. The proposed framework enables decentralized training across multiple clients, ensuring privacy through differential privacy mechanisms that mitigate risks of information leakage via gradient updates. Advanced neural network architectures are employed to effectively approximate nonlinear dynamics, with stability and scalability ensured by rigorous theoretical analysis. We compare our approach with both centralized and decentralized federated models, highlighting the advantages of our framework, particularly in terms of privacy preservation. Comprehensive experiments on benchmark datasets, such as the Lorenz system and real-world climate data, demonstrate that our federated model achieves comparable accuracy to centralized approaches while offering strong privacy guarantees. The system efficiently handles data heterogeneity and dynamic nonlinear behavior, scaling well with both the number of clients and model complexity. These findings demonstrate a pathway for the secure and scalable deployment of machine learning models in nonlinear system modeling, effectively balancing accuracy, privacy, and computational performance. Full article
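The differential-privacy mechanism is described only as protecting gradient updates, so as a generic illustration here is a Gaussian-mechanism sketch in the usual DP-FL style: clip each client's update to a fixed L2 norm, add noise calibrated to that clipping bound, then average. Function names and parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

def dp_client_update(grad: np.ndarray, clip: float, sigma: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Clip a client's gradient to L2 norm `clip`, then add Gaussian noise.

    sigma is the noise multiplier; the noise std sigma * clip is the usual
    calibration for the Gaussian mechanism on a norm-bounded query.
    """
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip, size=grad.shape)

def dp_fedavg_round(client_grads, clip=1.0, sigma=0.8, seed=0):
    # The server only ever sees the noisy, clipped updates.
    rng = np.random.default_rng(seed)
    noisy = [dp_client_update(g, clip, sigma, rng) for g in client_grads]
    return np.mean(noisy, axis=0)

grads = [np.ones(4) * k for k in (1.0, 2.0, 3.0)]  # toy client gradients
agg = dp_fedavg_round(grads)
```

Averaging over more clients shrinks the effective noise on the aggregate, which is why such schemes can approach centralized accuracy while each individual update stays protected.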
Figures:
Figure 1: Computational time vs. number of clients.
Figure 2: Heatmap of total computational time (seconds).
20 pages, 2041 KiB  
Article
Top-k Shuffled Differential Privacy Federated Learning for Heterogeneous Data
by Di Xiao, Xinchun Fan and Lvjun Chen
Sensors 2025, 25(5), 1441; https://doi.org/10.3390/s25051441 - 26 Feb 2025
Viewed by 255
Abstract
Federated learning (FL) has emerged as a promising framework for training shared models across diverse participants while ensuring data remain securely stored on local devices. Despite its potential, FL still faces critical challenges, including data heterogeneity, privacy risks, and substantial communication overhead. Current privacy-preserving FL research frequently fails to address the complexities of heterogeneous data adequately, which in turn inflates communication costs. To tackle these issues, we propose a top-k shuffled differential privacy FL (TopkSDP-FL) framework tailored to heterogeneous data environments. To address the model drift issue, we design a novel regularization term for local training, drawing inspiration from contrastive learning. To improve efficiency, we propose a bidirectional top-k communication mechanism that reduces uplink and downlink overhead while strengthening privacy protection through double amplification with the shuffle model. Additionally, we shuffle all local gradient parameters at the layer level to address the privacy budget concerns associated with high-dimensional aggregation and repeated iterations. Finally, a formal privacy analysis confirms the privacy amplification effect of TopkSDP-FL. The experimental results further demonstrate its superiority over other state-of-the-art FL methods, with an average accuracy improvement of 3% compared to FedAvg and other leading algorithms under the non-IID scenario, while also reducing communication costs by over 90%. Full article
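The top-k part of the bidirectional mechanism (transmit only the k largest-magnitude gradient entries in each direction) can be sketched as follows. The shuffler, DP noise, and layer-level handling are omitted, and all names are illustrative, so this shows only the communication-saving idea, not TopkSDP-FL itself.

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, k: int):
    """Keep the k largest-magnitude entries; return (indices, values).

    Sending index/value pairs instead of the dense vector is what cuts
    uplink (and, applied to the global update, downlink) traffic.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def topk_densify(idx: np.ndarray, vals: np.ndarray, dim: int) -> np.ndarray:
    """Rebuild a dense vector from the sparse (index, value) message."""
    out = np.zeros(dim)
    out[idx] = vals
    return out

g = np.array([0.1, -3.0, 0.05, 2.0, -0.2])
idx, vals = topk_sparsify(g, 2)
restored = topk_densify(idx, vals, g.size)  # only the two largest entries survive
```

With sparsity ratio k/d around a few percent, the per-round payload drops by roughly the same factor, which is consistent with the >90% communication savings the abstract reports.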
(This article belongs to the Special Issue Federated and Distributed Learning in IoT)
Figures:
Figure 1: The basic framework of federated learning.
Figure 2: The workflow of TopkSDP-FL.
Figure 3: The framework of the contrastive loss function in TopkSDP-FL.
Figure 4: Data distribution heterogeneity with different γ settings: (a) γ = 0.5; (b) γ = 0.1; (c) γ = 0.01.
Figure 5: Test accuracy of different schemes on three datasets (FedAvg [2], FedProx [6], SCAFFOLD [7], MOON [8], FedADMM [25], SOLO, and TopkSDP-FL) with γ = 0.1: (a) MNIST; (b) Fashion-MNIST; (c) CIFAR-10.
Figure 6: Test accuracy for different sparsity ratios sr with γ = 0.1: (a) MNIST; (b) CIFAR-10.
Figure 7: Communication cost per round for different algorithms (FedAvg [2], FedProx [6], SCAFFOLD [7], FedADMM [25], MOON [8], and TopkSDP-FL): (a) MNIST; (b) CIFAR-10.
Figure 8: Comparison of total training time on the MNIST dataset.
Figure 9: Test accuracy under different sparsity levels (sr) and privacy budgets (ε) with γ = 0.1: (a) MNIST; (b) CIFAR-10.
19 pages, 2208 KiB  
Article
A Novel Framework for Quantum-Enhanced Federated Learning with Edge Computing for Advanced Pain Assessment Using ECG Signals via Continuous Wavelet Transform Images
by Madankumar Balasubramani, Monisha Srinivasan, Wei-Horng Jean, Shou-Zen Fan and Jiann-Shing Shieh
Sensors 2025, 25(5), 1436; https://doi.org/10.3390/s25051436 - 26 Feb 2025
Viewed by 253
Abstract
Our research introduces a framework that integrates edge computing, quantum transfer learning, and federated learning to revolutionize pain level assessment through ECG signal analysis. The primary focus lies in developing a robust, privacy-preserving system that accurately classifies pain levels (low, medium, and high) by leveraging the intricate relationship between pain perception and autonomic nervous system responses captured in ECG signals. At the heart of our methodology lies a signal processing approach that transforms one-dimensional ECG signals into rich, two-dimensional Continuous Wavelet Transform (CWT) images. These transformations capture both temporal and frequency characteristics of pain-induced cardiac variations, providing a comprehensive representation of autonomic nervous system responses to different pain intensities. Our framework processes these CWT images through a sophisticated quantum–classical hybrid architecture, where edge devices perform initial preprocessing and feature extraction while maintaining data privacy. The cornerstone of our system is a Quantum Convolutional Hybrid Neural Network (QCHNN) that harnesses quantum entanglement properties to enhance feature detection and classification robustness. This quantum-enhanced approach is seamlessly integrated into a federated learning framework, allowing distributed training across multiple healthcare facilities while preserving patient privacy through secure aggregation protocols. The QCHNN demonstrated remarkable performance, achieving a classification accuracy of 94.8% in pain level assessment, significantly outperforming traditional machine learning approaches. Full article
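The 1D-to-2D step, turning an ECG trace into a time-frequency scalogram image, can be approximated with a plain NumPy Morlet CWT. This is a simplified stand-in under assumptions of my own (Morlet wavelet, linear scale grid, 250 Hz sampling); the paper does not specify its wavelet or scale choices.

```python
import numpy as np

def morlet(t: np.ndarray, w0: float = 5.0) -> np.ndarray:
    # Complex Morlet wavelet (admissibility correction term omitted).
    return np.exp(1j * w0 * t) * np.exp(-t**2 / 2) * np.pi**-0.25

def cwt_scalogram(signal: np.ndarray, scales: np.ndarray, fs: float) -> np.ndarray:
    """Return |CWT| as a (len(scales), len(signal)) image.

    Each row convolves the signal with a dilated wavelet; scales must be
    short enough that the wavelet support stays below the signal length.
    """
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1 / fs, 1 / fs)  # wavelet support
        w = morlet(t / s) / np.sqrt(s)
        out[i] = np.abs(np.convolve(signal, w, mode="same"))
    return out

fs = 250.0                                 # a typical ECG sampling rate
t = np.arange(0, 2, 1 / fs)
ecg_like = np.sin(2 * np.pi * 8 * t)       # toy stand-in for an ECG trace
img = cwt_scalogram(ecg_like, np.linspace(0.01, 0.2, 32), fs)
```

The resulting 2D array is what would be rendered as the "CWT image" fed to the downstream ResNet18/quantum classifier: rows index scale (inverse frequency), columns index time.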
Figures:
Figure 1: Architecture of the Quantum Hybrid Convolutional Neural Network: raw ECG signals are converted to scalograms, features are extracted with ResNet18 and processed by a 9-qubit quantum circuit, and classification is performed with a linear and SoftMax layer.
Figure 2: Quantum transfer learning: a system integrating edge computing, quantum transfer learning, and a federated learning (FL) server.
Figure 3: Transformation of an ECG signal to a CWT image: (a) raw ECG signal; (b) CWT image.
Figure 4: Quantum circuit consisting of multi-qubit controlled gates applied across the qubits, with classical measurements at the end of the computation.
32 pages, 2442 KiB  
Article
Federated Learning System for Dynamic Radio/MEC Resource Allocation and Slicing Control in Open Radio Access Network
by Mario Martínez-Morfa, Carlos Ruiz de Mendoza, Cristina Cervelló-Pastor and Sebastia Sallent-Ribes
Future Internet 2025, 17(3), 106; https://doi.org/10.3390/fi17030106 - 26 Feb 2025
Viewed by 248
Abstract
The evolution of cellular networks from fifth-generation (5G) architectures to beyond 5G (B5G) and sixth-generation (6G) systems necessitates innovative solutions to overcome the limitations of traditional Radio Access Network (RAN) infrastructures. Existing monolithic and proprietary RAN components restrict adaptability, interoperability, and optimal resource utilization, posing challenges in meeting the stringent requirements of next-generation applications. The Open Radio Access Network (O-RAN) and Multi-Access Edge Computing (MEC) have emerged as transformative paradigms enabling disaggregation, virtualization, and real-time adaptability, which are key to achieving ultra-low latency, enhanced bandwidth efficiency, and intelligent resource management in future cellular systems. This paper presents a Federated Deep Reinforcement Learning (FDRL) framework for dynamic radio and edge computing resource allocation and slicing management in O-RAN environments. An Integer Linear Programming (ILP) model has also been developed as a baseline, relative to which the proposed FDRL solution drastically reduces system response time. Furthermore, unlike centralized Reinforcement Learning (RL) approaches, the proposed FDRL solution leverages Federated Learning (FL) to optimize performance while preserving data privacy and reducing communication overhead. Comparative evaluations against centralized models demonstrate that the federated approach improves learning efficiency and reduces bandwidth consumption. The system has been rigorously tested across multiple scenarios, including multi-client O-RAN environments and loss-of-synchronization conditions, confirming its resilience in distributed deployments. Additionally, a case study simulating realistic traffic profiles validates the proposed framework's ability to dynamically manage radio and computational resources, ensuring efficient and adaptive O-RAN slicing for diverse and high-mobility scenarios. Full article
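The federated stage of an FDRL system ultimately reduces to periodically averaging each edge agent's policy weights. A minimal FedAvg sketch over NumPy weight dictionaries, with hypothetical layer names and toy shapes (the paper's actual agent architecture is not specified here), looks like:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter dicts (classic FedAvg).

    client_weights: list of {layer_name: np.ndarray}, identical shapes
    client_sizes:   sample/experience counts per client, used as weights
    """
    total = float(sum(client_sizes))
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }

# Two O-RAN edge agents with toy 'policy' layers of identical shapes.
c1 = {"dense": np.array([1.0, 1.0]), "out": np.array([0.0])}
c2 = {"dense": np.array([3.0, 3.0]), "out": np.array([2.0])}
global_w = fedavg([c1, c2], client_sizes=[1, 3])
# dense -> 0.25*[1,1] + 0.75*[3,3] = [2.5, 2.5]; out -> 1.5
```

Only these weight dictionaries cross the network, not the agents' local experience, which is where the bandwidth and privacy advantages over a centralized RL trainer come from.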
(This article belongs to the Special Issue AI and Security in 5G Cooperative Cognitive Radio Networks)
Figures:
Figure 1: O-RAN integration scenario in 5G and MEC systems.
Figure 2: Iterative ILP.
Figure 3: Bandwidth parts with mixed numerologies.
Figure 4: Slice admission control algorithm.
Figure 5: Proposed implementation scenario for the federated stage.
Figure 6: Federated system scenario (Test 1).
Figure 7: Reward obtained in Test 1 by Client 1 (a) and Client 2 (b).
Figure 8: Federated system scenario (Test 2).
Figure 9: Reward obtained in Test 2 by Client 1 (a), Client 2 (b), and Client 3 (c).
Figure 10: Federated system scenario (Test 3).
Figure 11: Reward obtained in Test 3 by Client 1 (a), Client 2 (b), and Client 3 (c).
Figure 12: Example of a realistic use-case implementation scenario.
Figure 13: FL evaluation results in the use case for Client 1 (a), Client 2 (b), Client 3 (c), and Client 4 (d).
Figure 14: Results of joint evaluation in the use case.
Figure 15: Centralized ML vs. FL.
Figure 16: Comparison of training between centralized and federated systems.