
Advances, Systems and Applications

Predictive digital twin driven trust model for cloud service providers with Fuzzy inferred trust score calculation

Abstract

Cloud computing has become integral to modern computing infrastructure, offering scalability, flexibility, and cost-effectiveness. Trust is a critical aspect of cloud computing, influencing user decisions in selecting Cloud Service Providers (CSPs). This paper provides a comprehensive review of existing trust models in cloud computing, including agreement-based, SLA-based, certificate-based, feedback-based, domain-based, prediction-based, and reputation-based models. Building on this foundation, we propose a novel methodology for creating a trust model in cloud computing using digital twins for CSPs. The digital twin is augmented with a fuzzy inference system, which computes the trust score of a CSP based on trust-related parameters. The architecture of the digital twin with the fuzzy inference system is detailed, outlining how it processes security parameter values obtained through penetration testing mechanisms. These parameter values are transformed into crisp values using a linear ridge regression function and then fed into the fuzzy inference system to calculate a final trust score for the CSP. The paper also presents the outputs of the fuzzy inference system, demonstrating how different security parameter inputs yield various trust scores. This methodology provides a robust framework for assessing CSP trustworthiness and enhancing decision-making processes in cloud service selection.

Introduction

Cloud computing is a transformative paradigm that has revolutionized the way we store, access, and utilize computing resources and services. It is essentially the delivery of numerous computing services over the internet, including storage, processing power, databases, networking, and software. Users can access remote data centers run by cloud service providers, which allows them to supplement their local hardware and infrastructure. This approach provides unmatched scalability, flexibility, and cost-effectiveness, making it a crucial component of contemporary IT environments.

Security in cloud computing is a critical concern because data, applications, and services are stored and processed on remote servers owned and managed by third-party providers. Although cloud computing has many advantages, such as scalability and cost-effectiveness, it also poses particular security concerns that must be addressed if organizations and users are to maintain the privacy, accuracy, and accessibility of their data and systems. Gauging how secure a cloud service provider is requires assessing many facets of its security practices and capabilities. This assessment helps organizations and users choose trustworthy and dependable services [1].

General methods of gauging the security level of a cloud service provider include checking for relevant certifications such as ISO 27001 (Information Security Management System) and SOC 2 (Service Organization Control) reports, and, where applicable, confirming compliance with industry-specific standards such as HIPAA (Health Insurance Portability and Accountability Act) or PCI DSS (Payment Card Industry Data Security Standard). It is important to ensure that encryption key management procedures are reliable and well-documented, and to identify the encryption techniques used for data both in transit and at rest. It is also important to check the provider’s identity and access management (IAM) technologies, such as role-based access controls (RBAC) and multi-factor authentication (MFA), as well as whether the provider offers single sign-on (SSO) integration with the organization’s authentication systems [2]. The network security of the CSP can be confirmed by examining its firewall configurations and network segmentation procedures, and by checking its intrusion detection and prevention systems (IDS/IPS) for identifying and mitigating potential threats. Analyzing the provider’s patch management procedures, and how quickly known vulnerabilities are resolved, reveals its vulnerability management practices. It is necessary to make sure that the provider regularly conducts penetration tests and vulnerability assessments, and to examine its incident response strategy, which details how security events are identified, assessed, and minimized. The provider’s security monitoring procedures, including logging and network traffic analysis, should also be assessed [3]. Access restrictions, surveillance, and environmental safeguards are all part of the physical security of a cloud provider’s data centers.
It is essential to confirm the provider’s business continuity and disaster recovery procedures. The provider’s security policies, methods, and practices can also be examined, along with the findings of independent security audits and assessments, to gauge the level of security. There is also a need to analyze the provider’s data protection practices, such as data retention rules and compliance with privacy laws.

A trust model for cloud service providers is a framework that aids in assessing and evaluating the trustworthiness of cloud service providers based on numerous criteria and characteristics. With the help of this model, businesses and individuals can choose the cloud service provider best suited to their requirements. The aim is to establish trust in the provider’s capacity to deliver dependable, secure, and compliant services. A major component that a trust model for cloud service providers might incorporate is security and compliance: the trust model will consider elements such as data encryption, access controls, authentication techniques, adherence to applicable industry standards (such as ISO 27001), and GDPR compliance. Another important component is Service Level Agreements (SLAs), where the provider’s ability to meet SLAs for uptime, availability, performance, and response times can be assessed using the trust model. SLA-abiding providers are frequently regarded as more reliable [4]. It is also important to assess the possibility of vendor lock-in: trustworthy providers comply with industry standards and make it possible to migrate to another provider or bring services in-house. Providers should have disaster recovery and business continuity policies in place, and the model might assess these preparations to measure the provider’s ability to handle disruptions properly. User evaluations and opinions can provide useful information about the provider’s performance and dependability, so the trust model may consider user experiences reported through reviews and ratings. Regular third-party audits and relevant certifications (for example, SOC 2, PCI DSS) demonstrate a commitment to security and compliance [4]. Privacy protection is another important component, where the model evaluates the provider’s policies regarding user privacy, sensitive data handling, and data residency requirements.
This can entail assessing user consent processes, data retention regulations, and data processing procedures. Transparency is another important component: trustworthy providers are frequently open and forthcoming about their operations, practices, and security measures, so the model could take into account criteria such as the availability of extensive documentation, audit logs, and data handling transparency. Cloud providers should clearly identify data ownership and provide data portability tools; the trust model could assess how well the provider supports data transfer and retrieval when necessary [5]. The trust model could also examine previous performance data from the provider to evaluate patterns in uptime, availability, and responsiveness. Because of their capacity to deliver seamless services across many areas, providers with a worldwide presence and redundant data centers are frequently seen as more trustworthy. The model may evaluate the legal terms and conditions, as well as the contract terms supplied by the provider, to ensure they meet the needs of the user. The main contributions of this paper can be summarised as follows:

  1. A digital twin enabled trust evaluation model is designed, which can conduct all the trust-related assessments and help cloud service providers improve their trust scores.

  2. An algorithm using a Fuzzy inference system is proposed to calculate the trust score from the results of all vulnerability assessments conducted on cloud service providers.

  3. An extensive experiment and analysis are conducted to evaluate the calculated trust score and how it is altered by various values of the trust parameters.

The rest of the paper is organized as follows. Section "Existing trust models and trust calculation methods in the cloud: Literature survey" conducts a comprehensive literature survey on various trust models and existing methods for trust score calculation. Section "Digital twin: A comprehensive overview for background study" explains in detail the different trust models and the basic concepts of the digital twin. A digital twin model for cloud service providers is proposed, and an algorithm using a fuzzy inference system to calculate the trust score is explained with its architecture, in Section "Digital twin-based trust score calculation integrating Fuzzy inference system"; that section also covers the detailed steps involved in implementing a digital twin integrated with a fuzzy inference system. Section "Results and Discussion" presents the detailed results and various interpretations of the experiment. Finally, Section "Conclusion and future enhancements" draws the conclusion and discusses drawbacks and possible future enhancements.

Existing trust models and trust calculation methods in the cloud: literature survey

Trust models: a detailed study

Trust models play a crucial role in ensuring the security and reliability of cloud services. A trust evaluation model can be designed based on various aspects, including agreement-based, QoS-based, certificate-based, and feedback-based approaches.

In the QoS-based trust evaluation model, quality of service (QoS) refers to the non-functional characteristics of cloud services that reflect how well they are delivered, such as availability, dependability, responsiveness, and security. In fact, among functionally equivalent services, QoS is a critical differentiator that can help a company retain and win customers [6, 7]. Sophisticated methods for cloud service selection based on trust evaluation rely on measuring the QoS of each service, matching these QoS characteristics with user preferences, and then recommending a service based on the degree of matching [8, 9]. To verify the dependability of a Cloud Service Provider (CSP), objective and subjective trust assessments are used to evaluate the QoS features of a specific cloud service [10] (Table 1).
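As an illustrative sketch of this matching idea (the service names, QoS attributes, and weights below are hypothetical, not taken from the cited works), a provider can be ranked by the weighted agreement between its measured QoS values and a user's preferences:

```python
# Hypothetical sketch: rank services by the weighted match between measured
# QoS values (normalized to [0, 1]) and a user's preference weights.
def qos_match_score(qos, preferences):
    """qos maps attribute -> value in [0, 1]; preferences maps attribute -> weight."""
    total_weight = sum(preferences.values())
    return sum(preferences[attr] * qos.get(attr, 0.0)
               for attr in preferences) / total_weight

services = {
    "csp_a": {"availability": 0.99, "responsiveness": 0.80, "security": 0.70},
    "csp_b": {"availability": 0.95, "responsiveness": 0.95, "security": 0.90},
}
prefs = {"availability": 0.5, "responsiveness": 0.2, "security": 0.3}
ranked = sorted(services, key=lambda s: qos_match_score(services[s], prefs),
                reverse=True)
```

A missing attribute simply contributes nothing to the score, so partially profiled services are still comparable.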

Table 1 Comparative study of different trust models in cloud

Trust models in cloud computing can be classified into several categories based on their underlying principles and methodologies. The main categories include agreement-based, certificate-based, feedback-based, domain-based, prediction-based, and reputation-based models. Each category offers unique insights into trust evaluation in cloud computing, catering to different aspects of security, reliability, and user satisfaction; it is important, however, to acknowledge the interconnected nature of these approaches. While distinct categories are outlined here, there may be overlaps and interdependencies among the models.

Agreement-based/SLA-based trust models are built on contracts and agreements between Cloud Service Providers and Cloud Users. The two most common contracts are service level agreements (SLAs) and service policy reports, which incorporate a number of security papers and QoS characteristics to foster confidence between the two parties. The contract parameters monitoring module exchanges the agreement with the customer to build trust between the two sides [11]. In the SLA-based trust model, the service level must be monitored, and the results of the monitoring serve as the foundation for determining objective trust. Traditional SLA monitoring approaches and tools are used to monitor network element layer and network layer performance characteristics [11].

In the certificate-based trust model, trust between CSPs and clients is created using certificates, trust tickets (TTs), and endorsement keys issued by a certificate authority. Security certificates for software, platforms, and infrastructure services are critical to building trust. Trust Tickets are issued to preserve the integrity and confidentiality of data kept in the cloud and to improve consumer trust [12]. Several certificates and secret keys used in the trust model ensure control over data moved to and delivered to cloud clients [13]. Certification-based solutions enable a prior analysis of cloud activities and verification of non-functional aspects of cloud application services. Initial certification solutions for static, monolithic packages are developed and implemented during setup and installation. As a result, certification procedures are developed to regulate the idiosyncrasies and meet the needs imposed by service-based systems [14].

The feedback-based trust model comprises customer feedback and opinion-gathering trust models that gauge consumer confidence in CSPs. As the initial stage in the trust evaluation process, several CSPs are registered with the trust model using the service registry module [15]. The feedback module then collects and stores client feedback on the various QoS and security parameters offered by registered cloud providers. The trust evaluation module calculates the CSPs’ trust score based on the feedback received.
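A minimal sketch of such a feedback pipeline follows; the class, method names, and ratings are illustrative, not the cited system's API. Each registered CSP accumulates per-parameter ratings in [0, 1], and the trust score is the mean over all feedback entries:

```python
from collections import defaultdict

# Hypothetical sketch of a feedback-based trust evaluation module.
class FeedbackTrustModel:
    def __init__(self):
        self.feedback = defaultdict(list)   # csp -> list of rating dicts

    def register_feedback(self, csp, ratings):
        """ratings maps a QoS/security parameter to a score in [0, 1]."""
        self.feedback[csp].append(ratings)

    def trust_score(self, csp):
        entries = self.feedback[csp]
        if not entries:
            return None                     # no feedback yet
        per_entry = [sum(r.values()) / len(r) for r in entries]
        return sum(per_entry) / len(per_entry)

model = FeedbackTrustModel()
model.register_feedback("csp_a", {"availability": 0.9, "security": 0.7})
model.register_feedback("csp_a", {"availability": 0.8, "security": 0.6})
```

A production system would additionally validate feedback before aggregation, as several of the surveyed models do.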

Some domain-based trust models have been presented as selective trust models for the cloud computing environment. The basic idea behind this category is to divide the cloud into distinct autonomous domains and distinguish two types of trust connections, intra-domain and inter-domain, derived from direct and recommended trust tables, respectively. The intra-domain trust values are determined by transactions between entities within the same domain. An entity must first check the direct trust table to obtain the direct trust value (DTV) for another entity. If the DTV is not present, the entity looks for recommended trust values from the other entities [11]. Prediction-based trust models focus on how to select trustworthy services for users and appropriately measure the service’s QoS. Methods in this category include social network analysis (SNA), fuzzy theory, evidence theory, and probability theory. Mehdi et al. [16] proposed a QoS-aware technique based on probabilistic models to aid in service selection. This technique allows users to maintain a trust model for each service provider with whom they have engaged in order to foresee the most dependable service. Qu et al. [17] proposed a method that assesses the trustworthiness of cloud services based on the user’s fuzzy QoS needs and the services’ dynamic performance to make service selection easier [18]. Reputation-based trust models play a crucial role, since the reliability of cloud services may have an impact on the service provider’s reputation; a reliable service provider is more likely to produce highly reliable services. Assessing and analyzing the reputations of cloud service providers based on history, experience, and third-party data can aid in the selection of dependable cloud services. Ramaswamy et al. [19] employ mobile agents’ monitoring methods, prize points, and penalties to ensure trustworthiness among cloud brokers, clients, and service providers.
Mouratidis et al. [20] proposed a system with a modeling language to aid in eliciting security and privacy requirements for selecting the best service providers [18]. Subjective trust models divide trust into numerous subclasses, such as cloud execution trust, code trust, and authority trust. Probability set theory and fuzzy set theory are the two fundamental approaches for assessing the amount of confidence in information about a specific CSP and the services provided. Depending on which of the two strategies is used, probabilistic or fuzzy theory algorithms are employed to assign weights and assess the numerous subclasses of trust. After the specific trust rating for each subclass is assessed, a final trust value representing the overall trust of the cloud provider is calculated by averaging these trust values [21].
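The subclass aggregation described above can be sketched as a weighted average; the subclass names, weights, and scores below are illustrative, and a real system would derive the weights with the probabilistic or fuzzy algorithms just mentioned:

```python
# Hypothetical sketch of subjective trust aggregation: each trust subclass
# receives a weight, and the weighted subclass scores are combined into a
# single overall trust value for the provider.
def overall_trust(subclass_scores, weights):
    """Both dicts map subclass name -> value; weights need not sum to 1."""
    total = sum(weights.values())
    return sum(weights[k] * subclass_scores[k] for k in weights) / total

scores = {"execution": 0.8, "code": 0.6, "authority": 0.9}
weights = {"execution": 0.5, "code": 0.3, "authority": 0.2}
trust = overall_trust(scores, weights)
```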

Existing methods for trust score calculation

In the context of cloud computing, calculating a trust score entails evaluating the trustworthiness and security of various components inside the cloud environment. This can include assessing the dependability of cloud service providers, virtual machines, containers, network connections, and other components. Based on the security and reliability of cloud services, the trust score assists users and organizations in making educated decisions about which cloud resources to utilize. There are various trust score calculation methods based on various policies and mathematical concepts.

Ragavendiran et al. [22] offer a trust model developed from the inter- and intra-domain Direct Trust components of the Direct Trust computation model. The goal is to boost user confidence, which is reflected in the providers’ trust score. The architecture’s two key components are User Bundles and Service Provider Data Centres. A fuzzy technique is used to calculate the trust score, and only performance, cost, agility, time, and security are considered in the computation. The trust values of cloud service providers, as well as their ranking, are established. Combining three Service Broker Policies with three load balancing schemes results in a trust score-based ranking of service providers, and the average trust score for each combination is obtained for analysis.

Parmar et al. [23] address the issue of trusted service selection and present a solution that uses Ordered Weight Averaging (OWA) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to help the user choose a trustworthy service. The characteristics used to calculate trust include availability, accessibility, response time, and price. The weights for the trust parameters are calculated using the ordered weight averaging approach. A decision matrix comprising m cloud services (CSs) and n trust parameters is created; when the matrix (CS)mn is normalized, it becomes the matrix (NCS)mn. A set of weights Wj (for j = 1, 2, ..., n) must be chosen for the trust parameters so that ΣWj = 1; these weights are calculated using the OWA technique. After that, the weighted normalized decision matrix is constructed. Then, for each trust parameter, the Ideal (AI) and Non-Ideal (AN) solutions are identified. Finally, the distances of each alternative from the ideal and non-ideal solutions are determined, along with its closeness to the ideal solution in terms of trust score.
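The TOPSIS steps above can be sketched as follows. The decision matrix, the fixed weights, and the assumption that all four criteria are benefit criteria (higher is better) are illustrative; the OWA derivation of the weights is omitted:

```python
import math

# Hypothetical TOPSIS sketch: rows are candidate services, columns are trust
# parameters (availability, accessibility, response-time score, price score),
# and the weights sum to 1.
def topsis(matrix, weights):
    # vector-normalize each column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(len(weights))]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)]
         for row in matrix]
    ideal = [max(col) for col in zip(*v)]        # A_I: best per criterion
    non_ideal = [min(col) for col in zip(*v)]    # A_N: worst per criterion
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)            # distance to ideal
        d_neg = math.dist(row, non_ideal)        # distance to non-ideal
        scores.append(d_neg / (d_pos + d_neg))   # relative closeness
    return scores

matrix = [[0.99, 0.9, 0.8, 0.6],
          [0.95, 0.8, 0.9, 0.9],
          [0.90, 0.7, 0.6, 0.5]]
weights = [0.4, 0.2, 0.2, 0.2]
scores = topsis(matrix, weights)
```

The third service here is dominated in every criterion, so its closeness score collapses to zero, matching the intuition that it is farthest from the ideal solution.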

Wang et al. [24] offer a trust value calculation method using grey correlation analysis to estimate the level of recommendation trust based on the degree of similarity. It is an evaluation model that combines weights with grey correlation analysis and presents a dynamic update approach for direct trust. The model can dynamically analyze comprehensive cloud service trust and identify the cloud service with the greatest level of comprehensive trust, considerably supporting consumers in selecting the best cloud service. Direct trust, recommendation trust, and reputation work together to build a more accurate total trust, increasing user satisfaction and interaction success rate. Because it considers both the transaction time and the transaction amount, the resulting direct trust is exceptionally accurate and efficient. Rough set theory is used to calculate the objective weight, while AHP is used to determine the subjective weight. A grey relational analysis approach is used to identify the degree of similarity for recommendation trust, making the recommendation trust more reasonable; it is then combined with the service provider’s own reputation to achieve comprehensive trust.
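The grey relational grade underlying this kind of similarity measure can be sketched as follows; the rating vectors are hypothetical, and rho = 0.5 is the customary distinguishing coefficient in grey relational analysis:

```python
# Hypothetical sketch of a grey relational grade used to measure how similar
# a recommender's rating vector is to the user's own rating vector.
def grey_relational_grade(reference, comparison, rho=0.5):
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(deltas), max(deltas)
    # per-attribute grey relational coefficients, then their mean
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

user_ratings = [0.9, 0.8, 0.7]
recommender_ratings = [0.85, 0.75, 0.9]
similarity = grey_relational_grade(user_ratings, recommender_ratings)
```

A grade near 1 indicates a recommender whose experience closely tracks the user's, so its recommendation trust would be weighted more heavily.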

Hassan et al. [6] propose a way to assist cloud customers in selecting the best cloud provider to carry out their jobs based on their preferences, using an improved QoS-based trust evaluation model. This method calculates an Accumulative/Computed Trust Value (ATV) for each cloud provider that will provide a specific service S during a time interval 1 to k, to determine how trustworthy the provider is. The ATV is updated at each transaction to reflect the most recent transaction conducted by the cloud service provider. The computation of the trust value at each time window k also takes run-time estimates of a resource’s processing speed and computational power into account. The trust factors evaluated in the computation include resource availability, resource success rate, turnaround efficiency, and data integrity. The model can be viewed as a four-layer architecture consisting of the Cloud Service User (CU), Cloud Service Provider (CSP), Cloud Service Broker (CSB), and System Manager (SM). The CSP provides its services in the cloud and releases service QoS information via the SLA, which a user can use to determine a suitable service that meets his QoS needs. The Cloud Service Broker is made up of several sub-modules, such as the directory service, SLA management, and feedback service. The key component is the System Manager, which includes the trust assessment module (TAM), used to analyze the ATV for each candidate CSP and generate a list of service providers ranked by accumulated trust value. The Trust Catalogue is a database that saves transaction information and the computed trust value of each invoked service; its structure contains a record for each invoked service in the cloud. The feedback service is made up of three parts: a feedback collector, a feedback validator, and a feedback repository. The feedback collector collects user feedback via a web form on a user portal, a web-based interface for interacting with the CSB. The feedback repository is a database that stores user feedback. The feedback validator detects malicious users’ feedback ratings based on the covariance technique: creditable users are identified using covariance, and fake users are filtered out.
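Hassan et al.'s exact ATV update formula is not reproduced in this survey; purely as a hedged illustration, a per-transaction accumulative value derived from the four trust factors could be maintained with an exponential moving average:

```python
# Illustrative sketch only (not the cited paper's formula): the ATV is nudged
# toward each transaction's mean factor score, so recent behavior dominates
# while history is retained. All factor values lie in [0, 1].
def update_atv(previous_atv, factors, alpha=0.3):
    transaction_trust = sum(factors.values()) / len(factors)
    return alpha * transaction_trust + (1 - alpha) * previous_atv

atv = 0.5   # neutral prior before any transactions
for factors in [
    {"availability": 1.0, "success_rate": 0.9,
     "turnaround_efficiency": 0.8, "data_integrity": 1.0},
    {"availability": 0.9, "success_rate": 1.0,
     "turnaround_efficiency": 0.7, "data_integrity": 1.0},
]:
    atv = update_atv(atv, factors)
```

Two good transactions pull the value above the neutral prior, mirroring how the cited ATV is "constantly changed at each transaction".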

Mujawar et al. [25] present a behavioral and feedback-based trust calculation scheme (BFTCS), which includes an algorithm for calculating behavioral trust, feedback trust, and cumulative trust. Cloud Users (CUs), Cloud Service Providers (CSPs), Cloud Nodes (CNs), and a Trust Computation Module (TCM) are the primary components of the proposed BFTCS technique. The method’s major goal is to ensure that the service is given to CUs via a reliable resource or service provider. The TCM is responsible for evaluating the trust values of a certain CSP before delivering the service to a CU. The calculation of behavioral trust and feedback trust is used to evaluate trust for a CSP in the cloud environment. Behavioral trust represents CSP behavior when offering services to CUs; it is calculated using numerous QoS parameters that reflect the CSP’s behavior, such as accessibility, honesty, response time, and throughput. The feedback trust represents users’ opinions on the services offered by the specific CSP; the factors indicating feedback include user satisfaction, security measures supplied, reliability, service pricing, and so on. The level of trustworthiness for a specific CSP is established based on the value of cumulative trust, which lies in the [0, 1] range. If the trust value falls below a specified limit, the CSP is considered untrustworthy and its trust level is marked as low. The cumulative trust value is used to classify CSPs as trustworthy, untrustworthy, or moderate. Verifying user feedback through correlation analysis and historical trust ensures that only legitimate feedback is included.
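A sketch of this cumulative-trust classification follows; the blend weight and the thresholds are illustrative placeholders, not the values used in BFTCS:

```python
# Hypothetical BFTCS-style sketch: cumulative trust is a weighted blend of
# behavioral trust (QoS-derived) and feedback trust, kept in [0, 1], then
# mapped to a trust level against illustrative thresholds.
def cumulative_trust(behavioral, feedback, w_behavior=0.6):
    return w_behavior * behavioral + (1 - w_behavior) * feedback

def trust_level(score, low=0.4, high=0.7):
    if score < low:
        return "untrustworthy"
    return "trustworthy" if score >= high else "moderate"

score = cumulative_trust(behavioral=0.8, feedback=0.6)
level = trust_level(score)
```

Keeping both inputs in [0, 1] guarantees the blend stays in [0, 1], so the thresholds partition the whole range.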

Hussain et al. [26] proposed a five-stage technique for Optimal Service Selection (MOSS) comprising prequel, assessment, ranking, integration, and consolidation/selection. In the prequel stage, a Pareto optimality-based approach is used to reduce the search space for cloud services by eliminating dominated providers. In the assessment stage, the Best Worst Method (BWM) [24] is employed for service evaluation based on two sets of criteria: (1) QoS and (2) QoE. Using a multi-MCDM technique, QoS- and QoE-based ranks of cloud services are established during the ranking stage. These ranks are combined at the integration stage, and finally, during the consolidation stage, Copeland’s technique is used to create harmony between the opposing ranks of cloud services produced by the different MCDM methods supplied by various vendors.

Muñoz et al. [27] proposed a three-layered architecture for dynamic security monitoring in cloud computing applications to enhance monitoring efficiency and effectiveness. The Local Application Surveillance (LAS) layer watches for application instances that defy rules and flags unusual activity. The Intra Platform Surveillance (IPS) layer incorporates monitoring analysis from LAS, simplifying monitoring efforts across platforms. By externally regulating instances, the Global Application Surveillance (GAS) layer offers a higher-level perspective of the system and increases efficiency. The architecture distributes monitoring tasks across the different tiers, allowing extensive monitoring and the identification of complex issues in cloud environments.

Digital twin: a comprehensive overview for background study

A digital twin is a virtual replica of a physical model, system, or process. It is a digital counterpart that replicates the real-world entity in a digital setting. Digital twins use data from sensors, simulations, and other sources to construct a dynamic and accurate model that mimics the real-world entity’s behavior, traits, and changes over time. Digital twins are not static models; they are constantly updated with real-time data. This allows them to deliver a current and accurate depiction of the physical entity they mimic. Digital twins are employed in a variety of industries, including manufacturing, healthcare, automotive, energy, and others. They are used to describe industrial equipment, buildings, cars, and even entire cities. Predictive analytics is made possible by digital twins, which use historical and real-time data to estimate future behavior. This is very useful for optimizing processes and making sound decisions. Data from Internet of Things (IoT) devices, such as sensors and actuators, are frequently used in digital twins to gather real-world data and enable real-time updating of the digital representation. Digital twins are especially effective in complex systems where it is vital to understand and manage the interactions and dependencies of many components. Three levels of digital twins are used: unit level, system level, and system-of-systems (SoS) level [28].

Various existing efforts include digital twin-based fault diagnosis, deep transfer learning, prediction analysis, manufacturing, and so on [29]. They are built by merging multiple technologies such as 3D modeling, data analytics, sensors, and simulation to model their physical counterparts in a digital context. The first phase is data collection, which involves gathering information from the actual thing or system for which the digital twin is constructed. This information can originate from a variety of sources, including sensors, IoT devices, CAD models, historical records, and more. After collecting the data, it must be merged and organized into a coherent dataset. Cleaning the data, matching different data sources, and preparing it for analysis and modeling may all be part of this process. In the second phase, the integrated dataset is used to recreate the behavior of a physical object or system in the digital environment using mathematical models, algorithms, and simulation approaches. Depending on the level of accuracy required and the complexity of the physical system, these models can range from simple to complex ones. 3D modeling and visualization tools are used to generate a visually realistic representation of a physical object or system. This stage entails generating a virtual 3D model of the object that precisely replicates its physical properties and appearance.

To ensure that the digital representation accurately reflects the actual condition of the physical object or system, digital twins frequently incorporate real-time data from sensors and other monitoring equipment. This real-time data can be utilized to change the parameters, behavior, and performance of the digital twin. To analyze the data collected from the physical system and the digital twin, data analytics technologies are used. This can provide information about the system’s performance, predict prospective problems, and optimize operations. Different environments and interactions can be simulated with digital twins. Engineers and operators can interact with the digital twin before executing changes, enhancements, or troubleshooting procedures on the actual physical system. It is an iterative procedure to create an accurate and successful digital twin. The digital twin gets more accurate and better matched with the physical system’s behavior as more data is collected and the model is developed [30].
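The real-time update loop described above can be sketched with a minimal twin object; all class, method, and telemetry names here are hypothetical illustrations:

```python
import time

# Illustrative sketch: a digital twin keeps its state synchronized with
# telemetry from the physical system and retains a history for analytics.
class DigitalTwin:
    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.state = {}          # latest known parameter values
        self.history = []        # (timestamp, snapshot) pairs

    def ingest(self, telemetry, timestamp=None):
        """Update the twin with a dict of real-time readings."""
        self.state.update(telemetry)
        self.history.append((timestamp or time.time(), dict(self.state)))

    def snapshot(self):
        """Return the current digital representation of the entity."""
        return dict(self.state)

twin = DigitalTwin("csp-vm-42")
twin.ingest({"cpu_usage": 0.62, "latency_ms": 41})
twin.ingest({"cpu_usage": 0.58})
```

Each ingest merges new readings into the state, so the twin converges on the physical system's current condition while the history supports the predictive analytics discussed above.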

Zheng et al. [31] examined the concept of digital twins in both the narrow and broad senses. Physical space, virtual space, and an information-processing layer were the three components of the suggested application framework. The digital twin can integrate full-scale system mapping, dynamic modeling throughout the lifecycle, and real-time optimization of the entire process during the application process. Grieves and Vickers [32] distinguished two sorts of digital twins: digital twin prototypes (DTPs) and digital twin instances (DTIs), with DTs operating within a digital twin environment (DTE). Schluse and Rossmann [33] developed the new idea of "Experimentable Digital Twins" and showed how these can simplify simulation-based development processes, enable system-level detailed simulations, and build smart systems. Two further studies on Experimentable Digital Twins (EDTs) followed [34, 35]. The benefits of merging digital twins with IoT and system simulation to facilitate model-based system engineering (MBSE) were explored by Madni et al. [36], who identified four degrees of virtual representation: pre-digital twin, digital twin, adaptive digital twin, and intelligent digital twin. Product Digital Twins, Process Digital Twins, and Operation Digital Twins were characterized by Bao et al. [37] as three types of digital twin models from the standpoint of the manufacturing process on the shop floor. Ullah [38] distinguished three forms of digital twins: object twins, process twins, and phenomenon twins. Despite these differing theories, the qualities of a digital twin are broadly agreed upon. Because digital twins may be used widely across many industry domains, several publications have attempted to keep a broad range of systems inside the scope of the digital twin idea, resulting in certain misunderstandings and confusion.

Digital twin-based trust score calculation integrating Fuzzy inference system

Digital twin-based trust score calculation integrating a fuzzy inference system (FIS) involves combining the concepts of digital twins and fuzzy logic to evaluate the trustworthiness of entities, such as Cloud Service Providers (CSPs). Integrating digital twins with a fuzzy inference system offers a flexible and adaptive method for calculating trust scores in complex and dynamic environments like cloud computing.

Modeling of digital twin for a CSP

To model a digital twin for a particular CSP such as AWS or Google Cloud, with trust score evaluation and security assessment as the primary objective, all trust parameters (here security, privacy, performance, dynamicity, and data integrity) and their metrics are monitored. This is done by monitoring the results of various vulnerability tests and client feedback. By analyzing the virtual machines associated with each client, we gathered information on user interactions, access patterns, and usage statistics, and monitored CPU usage, memory usage, network throughput, and latency. We also employed logs related to security events, access controls, and compliance records.

Depending on the CSP for which the digital twin is modeled, a corresponding specific architecture can be adopted for its development. A general architecture diagram is given in the section Architecture of Digital twin enabled CSP integrated with the Fuzzy inference system. Any cloud platform, such as AWS, Azure, or Google Cloud, can be chosen to deploy this digital twin. The mathematical and computational models are included within the digital twin. The entire fuzzy inference system, with the fuzzy rules and membership functions used to evaluate the trust score, is integrated within the digital twin model. The following tools and technologies were used for the deployment of a digital twin model for Google Cloud.

  • Data Collection: Google Cloud SDK, Google Cloud Monitoring API

  • Model Development: MATLAB Fuzzy Logic Toolbox

  • Real-time Data Processing: MATLAB scripts with data collection and processing loops

  • Visualization and Dashboard: MATLAB plotting functions

Trust score calculation process in predictive digital twin using ridge regression and Fuzzy inference System

Building a digital twin for a Cloud Service Provider (CSP) involves transforming the CSP’s physical infrastructure, services, and procedures into a virtual counterpart. The process begins with defining the scope of the CSP by pinpointing the key elements and aspects of its infrastructure and services provided to cloud customers. This step requires comprehensive information on the CSP’s physical infrastructure, network architecture, data centers, hardware specifications, software stack, and security measures, all of which can differ across CSPs. To accurately capture the essential components of the CSP environment, a basic data model representing the infrastructure and services must be created. This model will include algorithms for task distribution, resource allocation, and other dynamic activities [39].

Incorporating sensor data and real-time monitoring is crucial for maintaining the digital twin’s accuracy. The twin can be connected to the real monitoring systems used by the CSP, enabling it to reflect the current state of the physical infrastructure and services, as well as replicate security measures, protocols, and controls. This includes aspects such as identity management, access controls, encryption, and other security-related functions. Additionally, integrating with the CSP’s interfaces and APIs for data sharing allows the digital twin to interact with real CSP systems, collect real-time data, and respond to environmental changes [40]. To ensure the digital twin accurately represents the real CSP environment, thorough testing and validation against historical data and real-world scenarios are necessary. An iterative approach can be beneficial for refining the model in response to feedback, changes in specifications, and technological advancements. Regular updates to the digital twin are essential to accommodate any modifications to the CSP’s services and infrastructure, ensuring the twin remains a reliable and up-to-date virtual representation [41].

Phase I: data collection for digital twin

The digital twin of a cloud service provider must undergo a set of penetration testing mechanisms, which include network-related techniques (port scanning, network mapping, vulnerability scanning, etc.), web application techniques (SQL injection, cross-site scripting, cross-site request forgery, etc.), cloud infrastructure testing (assessment of cloud security groups, network ACLs, and firewall rules, review of Identity and Access Management (IAM) policies and roles, etc.), physical security testing, wireless network testing, endpoint security testing (vulnerability scanning of endpoint devices, exploitation of vulnerabilities in endpoint software, testing for weak or default credentials, etc.), social engineering testing (phishing attacks, impersonation to gain unauthorized access, etc.), Application Programming Interface (API) testing (testing for API authentication and authorization vulnerabilities, fuzz testing to identify potential vulnerabilities, etc. [42]), red team testing (combination of multiple penetration testing methods, emulation of advanced persistent threats, etc.), and incident response testing (simulated incident scenarios to test response procedures, analysis of incident detection and notification capabilities, evaluation of incident documentation and post-incident analysis, etc.).

Mapping penetration testing results to specific security parameters is the process of establishing a link between the vulnerability assessment and the required security attributes, such as security, privacy, integrity, dynamicity, performance, and their subparameters.
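As a rough illustration of this mapping step, the following Python sketch groups vulnerability scores from individual penetration tests under the trust parameters they inform. The category names, affected parameters, and relevance weights are all hypothetical placeholders, not values from this work:

```python
# Hypothetical mapping from penetration-test categories to the trust
# parameters they inform; weights express the relevance of a finding
# to each parameter (illustrative values, not measured ones).
PENTEST_TO_PARAMS = {
    "sql_injection":     [("security", 0.9), ("data_integrity", 0.7)],
    "xss":               [("security", 0.8), ("privacy", 0.5)],
    "iam_policy_review": [("privacy", 0.9), ("security", 0.6)],
    "port_scanning":     [("security", 0.5)],
    "incident_response": [("dynamicity", 0.6), ("performance", 0.4)],
}

def collect_parameter_findings(findings):
    """Group vulnerability scores (0-10) under each trust parameter.

    `findings` maps a test category to its observed vulnerability score;
    the result maps each parameter to its (score, relevance_weight) pairs.
    """
    grouped = {}
    for category, score in findings.items():
        for param, weight in PENTEST_TO_PARAMS.get(category, []):
            grouped.setdefault(param, []).append((score, weight))
    return grouped

# Two illustrative findings feed three trust parameters.
grouped = collect_parameter_findings({"sql_injection": 7.5, "port_scanning": 3.0})
```

The grouped scores and weights then feed the score estimation of Phase II.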

Phase II: trust parameter score estimation

A scoring system that measures the degree of vulnerability found in the penetration testing findings is used for score estimation. Score estimation is the process of assigning numerical values to vulnerabilities based on their significance and impact on security parameters. The degree of vulnerability of the system is the extent or severity of the weaknesses or susceptibilities identified through penetration testing, and the overall impact of vulnerability is the cumulative effect of vulnerabilities on the security posture of the system or network. Each individual security feature is a specific aspect of security, such as confidentiality, integrity, availability, authentication, or authorization. The purpose of this scoring system is to correspond with the significance of every security characteristic and all of its subparameters. Next, a weighting factor is assigned based on the relevance of each penetration testing result to each security criterion. This scoring methodology and the weighting factors are then used to calculate a composite score for each security parameter, a numerical representation of the overall impact of vulnerability on that individual security feature.

$$\text{SPS}=(w_1\cdot V_{S1})+(w_2\cdot V_{S2})+...+(w_n\cdot V_{Sn})$$
(1)

where:

  • SPS: Security Parameter Index

  • VSi: Vulnerability Score for Parameter i

  • wi: Weighting Factor for Parameter i

A sample variable and its composite score for a particular CSP are given in Table 2.

Table 2 Sample variable table for security parameters

To make sure that all security parameter or subparameter values are on the same scale, they can be normalized after calculation, particularly if the ranges of each vulnerability score differ. This stage facilitates the interpretation and comparison of the results across various parameters. To understand the significance of the composite scores, thresholds can be set for each security parameter. The mapping can then be validated by evaluating how well it represents the system’s actual security status and making necessary adjustments based on user feedback, additional testing, or shifts in the threat landscape.
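The composite score of Eq. (1), followed by the min-max normalization described above, can be sketched as follows; the vulnerability scores and weights are illustrative only:

```python
def security_parameter_index(scores, weights):
    """Composite score per Eq. (1): SPS = sum(w_i * V_Si)."""
    if len(scores) != len(weights):
        raise ValueError("one weight is required per vulnerability score")
    return sum(w * v for w, v in zip(weights, scores))

def min_max_normalize(values):
    """Rescale composite scores to [0, 1] so parameters are comparable."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Illustrative vulnerability scores (0-10) and relevance weights.
sps_security = security_parameter_index([7.5, 3.0, 6.0], [0.5, 0.2, 0.3])
sps_privacy = security_parameter_index([4.0, 5.5], [0.6, 0.4])
normalized = min_max_normalize([sps_security, sps_privacy])
```

Thresholds for interpreting the scores can then be applied to the normalized values rather than the raw composites.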

A range of values can be obtained for each of the parameters after the evaluation. For the generation of crisp values from this range, ridge regression is the method adopted. Ridge regression is a type of linear regression that includes a regularization term, also known as the ridge term or L2 penalty, to prevent overfitting in the presence of multicollinearity (high correlation among predictor variables). While ridge regression is typically used for predictive modeling, it can be adapted to many applications, including the generation of crisp values from a range of values. In standard linear regression, the goal is to find a set of coefficients that minimizes the sum of squared differences between the predicted and actual values. Ridge regression modifies this objective function by adding a penalty term that discourages large coefficients [43].

The standard linear regression objective function is:

$$\text{Minimize}:\sum\limits_{i=1}^{n}\left(y_{i} - \hat{y}_{i}\right)^{2}$$
(2)

In ridge regression, the objective function is modified to include a regularization term:

$$\text{Minimize}:\sum\limits_{i=1}^{n}\left(y_{i} - \hat{y}_{i}\right)^{2} + \lambda \sum\limits_{j=1}^{p} \beta_{j}^{2}$$
(3)

where:

  • yi: Actual output

  • ŷi: Predicted output

  • βj: Coefficients of the regression

  • λ: Regularization parameter

Equation (1) calculates the Security Parameter Index (SPS) by summing the product of the Vulnerability Scores (VSi) and their respective Weighting Factors (wi) for each security parameter.

The regularization term \(\lambda{\textstyle\;\sum_{j=1}^p}\;\beta_j^2\) penalizes large coefficients and helps to prevent overfitting.

To generate crisp values from a range, the idea is to use the regression model to predict a single value for each set of input subparameters corresponding to that range. A dataset of input parameter values and corresponding target values (the crisp values) is used to train the ridge regression model. An appropriate value is chosen for the regularization parameter λ, and the trained model is then used to predict crisp values for new sets of input parameters. The output of the model is a single predicted value for each input, and the model can be adjusted to improve accuracy.
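A minimal sketch of this step, assuming the objective of Eq. (3) without an intercept term and using the closed-form ridge solution beta = (X'X + λI)⁻¹X'y, is shown below; the training data is synthetic and stands in for the subparameter/target pairs described above:

```python
import numpy as np

def fit_ridge(X, y, lam):
    """Closed-form ridge solution beta = (X'X + lam*I)^-1 X'y for Eq. (3)."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def predict(X, beta):
    """Map each row of subparameter values to a single crisp prediction."""
    return X @ beta

# Synthetic training set: rows are subparameter vectors, targets are the
# crisp values the model should reproduce for such inputs.
X_train = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1], [0.7, 0.6]])
y_train = np.array([0.45, 0.50, 0.55, 0.66])

beta = fit_ridge(X_train, y_train, lam=0.1)
# A new range of subparameter values collapses to one crisp value.
crisp = predict(np.array([[0.6, 0.4]]), beta)[0]
```

Increasing `lam` shrinks the coefficients toward zero, trading a little training accuracy for stability when the subparameters are correlated.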

The trust evaluation using a Fuzzy inference system aims to assess the trustworthiness of Cloud Service Providers (CSPs) in cloud computing environments. By considering key Quality of Service (QoS) parameters such as Security, Privacy, Dynamicity, Data Integrity, and Performance, the fuzzy logic system calculates trust values for each CSP. This method enables a comprehensive evaluation of CSPs based on multiple trust parameters, providing a nuanced understanding of their trustworthiness in the cloud ecosystem.

The major steps involved in this research work are the following. The key trust parameters, including Security, Privacy, Dynamicity, Data Integrity, and Performance, are identified for evaluating CSPs, and their subparameters are listed as shown in Fig. 1. Depending on the CSP and its applications, the parameters, subparameters, sub-subparameters, and their weighted scores will vary. A fuzzy logic system is employed to process the trust parameters, executing iteratively from the leaf-node parameters up to the major QoS parameters and finally calculating the trust score (the root node of the tree) for each CSP. The trust score is derived from the QoS characteristics of each CSP, reflecting its overall trustworthiness in the cloud environment. The results obtained from this trust evaluation method provide valuable insights into the trustworthiness of CSPs in cloud computing. By analyzing the trust values derived from the fuzzy logic system, it is possible to rank CSPs based on their trust scores across the key trust parameters. The fuzzy-based trust score calculation offers a systematic and objective approach to evaluating CSPs, enabling Cloud Service Users (CSUs) to make informed decisions when selecting a trusted CSP for their cloud services [42].
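The iterative leaf-to-root evaluation can be sketched as a bottom-up traversal of the parameter tree. In this simplified sketch the tree fragment, weights, and leaf scores are hypothetical, and a weighted mean stands in for the per-node fuzzy inference actually used in this work:

```python
# Hypothetical fragment of the parameter tree of Fig. 1: each internal
# node lists (child, weight) pairs; weights at a node sum to 1.
TREE = {
    "trust_score": [("security", 0.35), ("privacy", 0.30), ("performance", 0.35)],
    "security":    [("attack_prevention", 0.6), ("encryption_strength", 0.4)],
}

# Crisp leaf scores in [0, 1], e.g. produced by the ridge-regression step.
LEAF_SCORES = {
    "attack_prevention": 0.7,
    "encryption_strength": 0.9,
    "privacy": 0.8,
    "performance": 0.6,
}

def evaluate(node):
    """Evaluate the tree bottom-up; a weighted mean stands in here for
    the per-node fuzzy inference step."""
    if node in LEAF_SCORES:
        return LEAF_SCORES[node]
    return sum(weight * evaluate(child) for child, weight in TREE[node])

trust = evaluate("trust_score")  # root-node value of the parameter tree
```

In the actual model, each internal node would invoke the fuzzy inference system on its children's values instead of a weighted mean.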

Fig. 1
figure 1

Parameter tree for trust score calculation

Architecture of digital twin enabled CSP integrated with the Fuzzy inference system

Creating a digital twin for a cloud service provider entails digitally reproducing the infrastructure, services, and operations of its cloud platform. This can help the provider monitor, manage, optimize, and troubleshoot its services more effectively. The architecture for a digital twin of a cloud service provider, with fuzzy logic used for the computation of the trust score, is given in the architectural diagram in Fig. 2. The basic components of the system are a cloud consumer, a cloud service provider, a digital twin model for the corresponding CSP, and the fuzzy inference system for trust calculation. In general, cloud service providers consist of various components, including compute services, storage services, networking services, identity and access management, and security and compliance services. Workloads and applications can be operated on the infrastructure through the compute services. These services usually comprise serverless computing solutions, containers, and virtual machines (VMs). Cloud data storage is made possible by storage services, which offer options for file storage, block storage, object storage, and so forth. A virtual network environment can be created and managed with the help of networking services. They include content delivery networks (CDNs), traffic management, firewall configuration, load balancing, and safe and effective connections between virtual machines and other resources. Cloud resource access and user identities are managed by Identity and Access Management (IAM) services. They offer user authorization, auditing, and authentication features. Because they enable administrators to implement granular access controls, IAM services are essential for security. Tools for encryption, threat detection, compliance certifications, and security breach monitoring are just a few of the security and compliance services that help safeguard cloud resources and data.
Developers may run programs without provisioning or maintaining servers by using serverless computing services; machine resource allocation is dynamically managed by the cloud provider. Application development, deployment, and administration are made easier by DevOps and deployment services. They include version control, infrastructure as code (IaC), and continuous integration and deployment (CI/CD) solutions. These services increase development efficiency by automating procedures. Management and monitoring services provide tools to monitor the performance, health, and usage of cloud resources. They often include dashboards, alerts, and analytics to help manage and optimize cloud environments. Together, these services form a comprehensive cloud ecosystem. Whenever a cloud customer requests a service, the request is sent to the physical cloud service provider and the same inputs are provided to the digital twin model (the input is whatever services the customer requested from the cloud service provider, and each service request from the customer is considered a transaction between the customer and the CSP). The twin executes the same services and also analyzes the various trust parameters in each transaction. The trust parameter values are passed to the fuzzy inference system attached to the digital twin. The digital twin of the CSP receives the necessary information, values, and credentials when the cloud customer tries to access services. The components and structure of the real CSP are mirrored in the digital counterpart. The digital twin fuzzifies the credentials, data, and values it receives. Fuzzification is the process of employing membership functions to transform crisp input data into fuzzy sets.

Fig. 2
figure 2

Architecture Diagram for Digital Twin model with Fuzzy inference system

The fuzzy system is an essential component in the predictive digital twin-driven trust model that evaluates and establishes the reliability of cloud service providers. Fuzzification, fuzzy inference system, and defuzzification procedures are all combined in the fuzzy system to transform crisp values from the ridge regression model into fuzzy sets depending on membership functions and linguistic variables. Fuzzy inference rules are used to calculate the trust score, and fuzzy input sets are represented by triangular membership functions. The data from several rules are combined to create a single fuzzy output that represents the trust value. The digital twin’s perception of the CSP’s level of trustworthiness is reflected in the clear trust value that is obtained after defuzzifying this fuzzy output. The fuzzy system offers a thorough assessment of the CSP’s dependability, assisting cloud users in making well-informed judgments about service use. It is powered by a predetermined set of rules that are customized to different cloud service offerings. The fuzzy system is continuously improved by feedback and historical data, ensuring that trust computations are accurate and pertinent in the dynamic cloud environment.

Taking into account the fuzzy rules, the fuzzy inference engine ascertains the extent to which the input data belongs in each fuzzy set. Relationships between the antecedent and consequent components of fuzzy rules are established through the use of implication techniques. To obtain a thorough inference, the outcomes of various rules are combined. After that, defuzzification transforms the combined fuzzy values back into a crisp one. The CSP’s degree of trust is reflected in this clear value. The CSP’s level of trustworthiness is then measured using the defuzzified trust value.

The proposed work uses the fuzzy system incorporated within the relevant digital twin to calculate the trust value of a cloud service provider. Fuzzification, fuzzy inference, and defuzzification are the three processes that make up the fuzzy system. The crisp value produced by the linear ridge regression for each subparameter is used as input during the fuzzification phase, in which our suggested work uses triangular membership functions to transform the crisp set into linguistic variables with associated membership values. The fuzzy set, an ordered pair of values and membership values, is the result of the fuzzification process. The range of each input parameter is set before starting the fuzzification process. Triangular membership functions and linguistic variables are used to produce the fuzzy input set. Next, the fuzzy inference rules are applied, and the min operation can be used if multiple rules are activated. The final step is to aggregate the data to obtain a single fuzzy output for the trust score.
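A minimal single-input Mamdani-style sketch of these three steps (triangular fuzzification, min implication with max aggregation, and centroid defuzzification) is given below; the linguistic terms, breakpoints, and rules are illustrative assumptions, not the rule base of this work:

```python
def tri(x, a, b, c):
    """Triangular membership function with peak b over support [a, c]."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Illustrative linguistic terms for one normalized input (a security
# score in [0, 1]) and for the output trust score.
IN_SETS = {"low": (0.0, 0.0, 0.5), "high": (0.5, 1.0, 1.0)}
OUT_SETS = {"low": (0.0, 0.0, 0.5), "high": (0.5, 1.0, 1.0)}
RULES = [("low", "low"), ("high", "high")]  # IF input IS x THEN trust IS y

def infer_trust(security, steps=200):
    """Mamdani-style inference: fuzzify, clip each output set at its
    rule's firing strength (min implication), aggregate with max, and
    defuzzify by a discrete centroid over [0, 1]."""
    strengths = {out: tri(security, *IN_SETS[inp]) for inp, out in RULES}
    num = den = 0.0
    for i in range(steps + 1):
        z = i / steps
        mu = max(min(s, tri(z, *OUT_SETS[out])) for out, s in strengths.items())
        num += z * mu
        den += mu
    return num / den if den else 0.0
```

For a high security input the centroid lands near the upper end of the trust range; the full system applies the same machinery over all five QoS inputs and a larger rule base.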

Based on the input data, the digital twin’s perception of the CSP’s dependability, security, and general legitimacy is reflected in this trust value. The cloud consumer’s decision-making process may be influenced by the trust value. Based on this information, the consumer can decide whether or not to proceed with using the CSP’s services. The accuracy of trust computations can be increased by updating the digital twin and FIS in response to feedback and historical data. The fuzzy inference system uses the digital twin as a virtual mirror of the CSP to examine incoming data and determine a trust value. This trust value is based on a predetermined set of guidelines and factors that are particular to the various cloud service offerings from the CSP. The goal of the entire procedure is to give the cloud customer a comprehensive evaluation of the CSP’s reliability prior to using its services.

The trust score can be generated for various parameter values. Because the trust value can vary drastically over time, the digital twin can generate alerts if the trust score falls below a specified threshold. For each service request from the client, depending on the service provided by the CSP, the trust parameter values are determined and the trust score is calculated by the fuzzy system.

The cloud service provider will make the necessary modifications in its organization, architecture, and access policies in order to improve its trust score. Improving the trust score of a cloud service provider involves a comprehensive approach that includes organizational, architectural, and policy modifications. Architecturally, strengthening network security with firewalls, intrusion detection and prevention systems, encryption protocols, and secure network segmentation, enhancing data protection through encryption, access controls, and secure storage practices, and ensuring scalability and redundancy for service continuity are essential steps. Revising access policies by implementing robust Identity and Access Management (IAM) policies, such as role-based access controls (RBAC), multi-factor authentication (MFA), and single sign-on (SSO), defining granular authorization policies, and setting up comprehensive monitoring and logging mechanisms to track and investigate user activities is also crucial. Continuous improvement can be achieved by conducting regular security audits, vulnerability assessments, and penetration testing, soliciting feedback from customers and security experts, and providing ongoing training and awareness programs. By integrating these strategies, a cloud service provider can significantly enhance its trust score, build credibility with customers, and demonstrate a strong commitment to security, reliability, and trustworthiness in the delivery of cloud services.

The detailed working of the entire system is depicted in the sequence diagram shown in Fig. 3. The sequence diagram illustrates the interactions between the different entities involved in evaluating and obtaining a trust score for Cloud Service Providers (CSPs). The entities include the Cloud Service User (CSU), the Cloud Service Provider (CSP), the digital twin of the CSP, and the fuzzy inference system. The process begins with the CSU sending a request for cloud services to the CSP. A digital twin of the CSP's infrastructure is attached to the CSP and is used to simulate and analyze the CSP's environment. The digital twin is utilized to perform vulnerability testing, which involves assessing the CSP's infrastructure for potential security vulnerabilities and weaknesses. The results of the vulnerability testing, including details about any detected vulnerabilities and their severity, are sent back to the CSP. Using the vulnerability test results, the CSP normalizes the parameter values; normalization is necessary to ensure that the data is consistent and comparable. The normalized data is then sent to the fuzzy inference system. The system performs fuzzification, which involves converting crisp input values into fuzzy values using triangular membership functions. This step translates the precise input data into a range of values that represent uncertainty and imprecision. The fuzzy inference system applies predefined fuzzy rules to the fuzzified data. These rules are designed to evaluate various conditions and derive a conclusion based on the input data. After the fuzzy rules are applied, the system performs defuzzification. This process converts the fuzzy values obtained from the inference process back into crisp values; in this context, it results in a clear trust score, which quantifies the level of trustworthiness of the CSP. The CSP evaluates the outcomes of the fuzzy rules applied by the fuzzy inference system.
This evaluation helps in understanding the rationale behind the trust score. The calculated trust score is returned to the CSU. This score helps the CSU assess the trustworthiness of the CSP and make informed decisions regarding cloud service selection. The CSU may provide feedback based on their experience and any historical data collected over time. This feedback loop helps in the continuous refinement of the trust evaluation process, ensuring that it remains accurate and up-to-date. The sequence diagram thus outlines a systematic approach to evaluating the trustworthiness of CSPs using digital twins and fuzzy logic, integrating testing, normalization, fuzzy inference, and user feedback to deliver a comprehensive trust score that aids cloud service users in making informed decisions. The algorithm executing within the digital twin is depicted in Algorithm 1. The digital twin is tested with the different penetration tests mentioned above, and the results are mapped to different trust parameters. A weightage value is assigned to each parameter depending on its relevance score. The security parameter index is calculated as the sum of the products of each parameter's weightage and its vulnerability score, and it is passed through the fuzzy inference system to generate the final trust score.

Fig. 3
figure 3

Sequence Diagram for Digital Twin model with Fuzzy inference system for trust score calculation

figure a

Algorithm 1 An algorithm for trust score calculation

Results and discussion

Utilizing the MATLAB simulation toolbox and the ESA control toolbox, we can construct the simulation logic that regulates the behavior of the digital twin [44]. All the inputs given to the cloud service provider can be passed to the digital twin in order to obtain the initial subparameter values, and a final trust score is generated from the digital twin. The surface viewer in Fig. 4 provides a comprehensive view of how various factors, such as performance, security, privacy, and availability, interact and influence key metrics like the trust score, data integrity, and system performance. This analysis is crucial where a delicate balance between various attributes must be maintained to ensure reliable and secure service delivery. Each plot provides a visual representation of the complex relationships between the variables, allowing stakeholders to make informed decisions about prioritizing certain aspects over others, depending on the specific requirements and constraints of their system. As given in the trust parameter tree of Fig. 1, the upper-level parameter values are calculated from the subparameter values. Figure 4a shows the variation in the output parameter 'attack prevention' depending on the input subparameters 'collusion attack' and 'Sybil attack'. This plot shows how the intensity of attacks varies with different levels of collusion and Sybil attacks; the surface indicates regions where attack intensity is higher, highlighting possible vulnerability combinations. Figure 4b shows the variation in the output parameter 'data integrity' depending on the input subparameters 'availability' and 'compliance checking'. It could be used to assess the trade-offs between these factors and their impact on maintaining data integrity. Similarly, Fig. 4c shows the variation in the output parameter 'dynamicity' depending on 'interoperability' and 'ease of use'.
The plot can be used to understand how enhancing interoperability and ease of use impacts the dynamic nature of the system. The relationship between the input parameters 'bandwidth' and 'availability' and the output parameter 'performance' is shown in Fig. 4d. Similarly, the variation in the output parameter 'privacy' depending on the input parameters 'access control' and 'auditability' is shown in Fig. 4e. It could be useful in designing systems that balance security with privacy concerns. Figure 4f illustrates how security levels and system adaptability impact trustworthiness. This is very useful for the CSP in improving security levels to enhance its trust score. Similarly, Fig. 4g, h, and i illustrate how to maintain balance among trust parameters such as security, privacy, performance, and data integrity to maintain a minimum trust score. The mapping between the input variables and the output variable is visible from the control surfaces, which depict how changes in the input variables affect the output variable value. The control surfaces for each parameter reveal insights into the system's sensitivity to changes in the input variables and can be used to optimize the system's trust score by adjusting the fuzzy rules or membership functions.

Fig. 4
figure 4

Control surface of various trust parameters for trust score calculation

The scatter plots clearly show the variation in the trust score for a particular CSP under different trust parameter inputs, as given in Figs. 5 and 6. They represent how the trust score varies with each trust parameter value. In Fig. 5, it is clear that the trust score reaches a peak value when both the security and privacy values are high. Also, if one of the parameter values (say, security) is improved above a particular range, there is a peak in the trust score. In Fig. 6, the effect of dynamicity and performance on the trust score is plotted; a very small variation in performance helps to bring the trust score into a peak range. Based on this, the trust score can be improved by varying the security features of the system. For that, the vulnerable areas can be identified from the different penetration tests and preventive measures taken against these attacks. The security-related checks and vulnerability assessment can be performed periodically on the digital twin so that the system can be upgraded with more security features. The influence of each parameter can be evaluated with respect to the generated trust score, and each parameter threshold can be set to a minimum value.

Fig. 5
figure 5

Scatter plotter for Trust score with security and privacy

Fig. 6
figure 6

Scatter plotter for Trust score with dynamicity and performance

A sample range of trust parameter values can be obtained, as in Table 3, so that the trust score has a value greater than 90. In that table, the expected range indicates the parameter value obtained from penetration testing and vulnerability assessment. The influence score is obtained by repeated testing on the data with n parameter values using the ANOVA function in MATLAB. The influence scores are represented using the bar graph shown in Fig. 7. This bar chart makes it easy to visually compare the influence scores, providing a quick reference for understanding which parameters have the highest impact. The visualization complements the numerical data in the table by offering a more intuitive way to grasp the relative importance of each parameter. Using ANOVA, the influence score can be calculated as the F-statistic value. The F-statistic is calculated as:

$$F=\frac{SSB/\left(K-1\right)}{SSW/\left(N-K\right)}$$
(4)
Table 3 Expected parameter range and score
Fig. 7
figure 7

Influence scores of trust parameters

where K-1 is the degrees of freedom for the between-group variance and N-K is the degrees of freedom for the within-group variance. SSW and SSB are the within-group and between-group variances, respectively.
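Equation (4) can be computed directly from grouped samples; in this sketch the two groups of trust-score samples are synthetic:

```python
def f_statistic(groups):
    """One-way ANOVA F = (SSB/(K-1)) / (SSW/(N-K)) as in Eq. (4)."""
    n = sum(len(g) for g in groups)  # N: total number of observations
    k = len(groups)                  # K: number of groups
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: group sizes times squared mean offsets.
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations from each group mean.
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Illustrative trust-score samples under two parameter settings.
f = f_statistic([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
```

A larger F value means the parameter setting explains more of the variation in the trust score, i.e. a higher influence score.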

As a result, using this trust score calculation method in the digital twin, the relationship between each major parameter and the trust score can be identified. Since the influence score of the security parameter is 9.1 and there is a deviation in the total trust score for each variation in the security value, we can say that the impact of the security parameter is high, with a preferable range above 75. If the value is below 75, the total trust score of the CSP will be very low. An influence score of 9.1 means that a very small change in the security parameter value produces a large change in the total trust score of the CSP. In the case of the trust parameter privacy, the scatter plot and control surface make it clear that its influence on the total trust score is very high (influence score 9.73), so its range can be very high (>90). Since performance is a crucial factor for a cloud service provider, as is visible from the control surface, its range can also be very high (>90). From the control surfaces, it is clear that dynamicity and data integrity are less influential factors on the trust score, and their influence scores are low compared with the other major parameters.

The trust value is determined in the same manner for each set of fuzzified inputs in the digital twin, and the resulting trust values are stored in the digital twin's trust database. When a cloud user requests a resource, the system supervisor can use the fuzzy system within the digital twin to calculate the trust value and allocate the resource that is deemed the most trustworthy.
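The per-request trust calculation above can be sketched as a minimal, self-contained Mamdani-style inference in Python. The membership functions, the three-rule base, and the restriction to two inputs are illustrative assumptions for compactness; the paper's full system uses more parameters and rules:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def trust_score(security, privacy):
    """Hypothetical two-input sketch: each rule's firing strength (min of its
    antecedent memberships) weights a representative output trust level;
    defuzzify by weighted average."""
    low  = lambda v: tri(v, -1, 0, 50)
    med  = lambda v: tri(v, 25, 50, 75)
    high = lambda v: tri(v, 50, 100, 101)

    # (rule firing strength, representative output trust level)
    rules = [
        (min(high(security), high(privacy)), 95.0),  # both high  -> trust high
        (min(med(security),  med(privacy)),  60.0),  # both medium -> moderate
        (max(low(security),  low(privacy)),  20.0),  # either low -> trust low
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

For example, `trust_score(95, 95)` fires only the first rule and returns a high score, while a low security value drags the result down regardless of privacy, mirroring the high influence of the security parameter noted above.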

Validation of trust model using AVISPA

Formal validation is a fundamental component of cloud computing trust models, acting as a safeguard for the reliability and consistency of the proposed techniques. Validation procedures, such as those enabled by tools like AVISPA, provide a systematic way to examine the security protocols and trust calculations built into the model. They help to find potential weaknesses, confirm the precision of security measures, and validate the system's overall reliability. This validation creates a foundation of confidence and dependability for cloud ecosystem stakeholders while strengthening the trust model's resilience against new threats and hazards. Formal confirmation of the trust model's validity not only strengthens its credibility but also underscores the commitment to maintaining strict security guidelines and guaranteeing the privacy, availability, and integrity of cloud services. By using formal validation methodologies, academics and practitioners can navigate the complicated landscape of cloud security with confidence and precision, ultimately promoting a climate of trust and assurance in cloud computing settings. The AVISPA tool can identify various types of attacks that are common in cloud computing trust models [45], including man-in-the-middle (MitM) attacks, denial-of-service attacks, data breaches, insider threats, phishing attacks, malware infections, SQL injection, cross-site scripting (XSS), brute-force attacks, and social engineering. The overall security problems analyzed and attacks detected are given in Table 4.

Table 4 AVISPA analysis result

Table 5 provides a concise summary of the vulnerabilities detected by AVISPA in the trust model's security protocols and trust calculations within the cloud computing environment. Each row specifies a security property, the vulnerability identified, and a corresponding recommendation to address the issue and strengthen the trust model's security posture; these issues are to be addressed urgently in this fuzzy-based trust evaluation system.

Table 5 Major vulnerabilities detected using AVISPA tool

A detailed case study

A medium-sized business is considering a cloud service provider to host its customer relationship management (CRM) system. The organization treats performance, security, and data privacy as important considerations when making decisions. The trust model evaluates the candidate cloud service providers on key parameters such as security, performance, privacy, and data integrity, and each parameter is assigned a weight based on the company's priorities. A control surface diagram can be generated for various parameter pairs depending on the preferences of the business organization, visually representing the relationship between them. For example, the control surface between security and performance shows how changes in security measures impact the overall performance of the cloud service. Scatter plots generated for this case can demonstrate the correlation between parameters; for example, by analyzing a scatter plot between access control and privacy, the company can understand how strengthening access control policies influences data privacy levels. Using the fuzzy inference system, the trust model calculates a trust score for each potential cloud service provider, reflecting the provider's overall reliability and suitability for hosting the CRM system. Based on the trust scores generated by the model, the company can make an informed decision, selecting the most trustworthy cloud service provider that aligns with its security and performance requirements.
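The decision step in this case study can be sketched as a weighted aggregation over the company's priorities. The weights, provider names, and per-parameter scores below are hypothetical values for illustration; in the proposed model the per-parameter scores would come from the fuzzy inference system:

```python
def rank_csps(csps, weights):
    """Rank CSPs by the weighted sum of per-parameter scores (0-100 scale).

    csps: maps provider name -> {parameter: score}
    weights: maps parameter -> weight (weights sum to 1).
    """
    totals = {
        name: sum(weights[p] * scores[p] for p in weights)
        for name, scores in csps.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical company priorities and candidate providers
weights = {"security": 0.4, "performance": 0.3, "privacy": 0.2, "data_integrity": 0.1}
csps = {
    "CSP-A": {"security": 92, "performance": 85, "privacy": 95, "data_integrity": 88},
    "CSP-B": {"security": 78, "performance": 95, "privacy": 80, "data_integrity": 90},
}
ranking = rank_csps(csps, weights)  # most trustworthy provider first
```

With these assumed numbers the security-heavy weighting favors CSP-A even though CSP-B performs better, which is exactly the kind of trade-off the control surface and scatter plots make visible.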

Comparative analysis with other trust models

Table 6 shows a detailed comparison of the digital twin-based trust evaluation model with the other prominent trust models that can be employed in a cloud system. The predictive digital twin trust model stands out as the only method offering real-time trust calculation, enabling dynamic assessment based on up-to-date data and using a fuzzy inference system to generate accurate trust scores. Its main advantage is that it allows continuous monitoring and adjustment of trust scores as security parameters evolve and real-time updates arrive. It also provides a holistic evaluation by integrating fuzzy inference systems with digital twin technology: by leveraging digital twins, the model offers a virtual replica for analyzing CSP behavior and improving trust assessment accuracy.

Table 6 Comparative analysis of trust evaluation methods

Conclusion and future enhancements

In conclusion, this research paper has provided a comprehensive overview of trust models in cloud computing and proposed a novel methodology for building a trust model using digital twins for Cloud Service Providers (CSPs). The importance of trust in cloud computing was highlighted, emphasizing the need for reliable methods to assess CSP trustworthiness.

Various existing trust models, including agreement-based, SLA-based, certificate-based, feedback-based, domain-based, prediction-based, and reputation-based models, were discussed. These models serve as the foundation for understanding the complexities of trust assessment in cloud computing.

The proposed methodology leverages digital twins integrated with a fuzzy inference system to calculate the trust score of CSPs based on various trust-related parameters. The architecture of the digital twin with the fuzzy inference system was explained in detail, illustrating how it processes security parameter values obtained through penetration testing mechanisms.

Through this methodology, a range of values for each security parameter is converted into a crisp value using a linear ridge regression function. These values are then passed to the fuzzy inference system to compute a final trust score for the CSP. The outputs of the fuzzy inference system, including the trust scores for different security parameter inputs, were also presented.

Although our proposed trust model provides valuable insight for evaluating cloud service providers, a few important limitations need to be noted. First, because cloud systems are dynamic and reliable real-time data is not always readily available, gathering data for trust parameters such as security and performance may pose difficulties. Second, the accuracy of the fuzzy inference system used to calculate trust scores depends heavily on the quality of the input data and the efficacy of the defined membership functions; errors in the input data or ambiguous rule definitions could yield misleading trust scores. Furthermore, significant consideration must be given to the model's scalability in large-scale cloud settings: as the number of parameters and data points rises, the computational complexity of the fuzzy inference system may affect the model's responsiveness and efficiency. In addition, because different cloud service providers may prioritize trust criteria differently based on their unique business requirements, the generalizability of the trust model across industry sectors and providers may be limited. The proposed digital twin-based fuzzy inference system (FIS) is evaluated on key performance metrics such as accuracy, precision, recall, F1-score, and trust score calculation time. It demonstrates superior performance, with an accuracy of 95.2%, precision of 94.5%, and recall of 96.0%, significantly outperforming the other models in both effectiveness and efficiency, as indicated by its faster trust score calculation time of 150 ms. In contrast, existing models, such as the MCDM-based and agreement-based models, show lower accuracy and longer calculation times, highlighting their limitations in dynamic environments.
This comprehensive comparison underscores the advantages of integrating digital twins with fuzzy logic for enhanced trust assessment in cloud service providers.
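The evaluation metrics reported above follow the standard confusion-matrix definitions. The sketch below uses illustrative counts (hypothetical, not the actual evaluation data) to show how such figures are derived:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts
    (tp/fp/fn/tn = true/false positives and negatives)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts over 200 trust decisions, for illustration only
acc, prec, rec, f1 = metrics(tp=96, fp=6, fn=4, tn=94)
```

Here "positive" would mean the model flagged a provider as trustworthy and the flag was correct; recall then measures how many genuinely trustworthy providers the model recognized.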

Furthermore, non-technical users may find it difficult to interpret the trust scores that the model generates, which could impede the model's adoption and usefulness. Finally, the fuzzy inference system's reliance on historical data for training could introduce bias or outdated assumptions, limiting the model's capacity to adapt to changing cloud security and performance requirements.

Overall, this research contributes to the advancement of trust assessment in cloud computing by proposing a robust methodology that enhances decision-making processes in selecting CSPs. Future work could focus on further refining the methodology and evaluating its effectiveness in real-world cloud environments while addressing the drawbacks noted above.

Availability of data and materials

No datasets were generated or analysed during the current study.

References

  1. Singh S & Kumar D (2023) Vulnerability of cyber security in cloud computing environment. In 2023 4th International conference on electronics and sustainable communication systems (ICESC) 572–580. https://doi.org/10.1109/ICESC57686.2023.10193087

  2. Shynu P, Singh KJ (2016) A comprehensive survey and analysis on access control schemes in cloud environment. Cybern Inf Technol 16:19–38


  3. Weil T (2018) Taking compliance to the cloud—using iso standards (tools and techniques). IT Prof 20:20–30. https://doi.org/10.1109/MITP.2018.2877312


  4. Mohammed AM, Morsy EI & Omara FA (2018) Trust model for cloud service consumers. In 2018 International conference on Innovative Trends in Computer Engineering (ITCE) 122–129. https://doi.org/10.1109/ITCE.2018.8316610

  5. Lakshmi DV et al (2023) Approaches of security in cloud computing. In 2023 3rd International Conference on Smart Data Intelligence (ICSMDI) 211–215.  https://doi.org/10.1109/ICSMDI57622.2023.00047

  6. Hassan H, El-Desouky AI, Ibrahim A, El-Kenawy E-SM, Arnous R (2020) Enhanced qos-based model for trust assessment in cloud computing environment. IEEE Access 8:43752–43763. https://doi.org/10.1109/ACCESS.2020.2978452


  7. Zheng X, Xu LD, Chai S (2017) Qos recommendation in cloud services. IEEE Access 5:5171–5177. https://doi.org/10.1109/ACCESS.2017.2695657


  8. Sun L, Dong H, Hussain FK, Hussain OK, Chang E (2014) Cloud service selection: State-of-the-art and future research directions. J Netw Comput Appl 45:134–150


  9. Hassan H, El-Desoky A & Ibrahim A (2017) An economic model for cloud service composition based on user’s preferences. In 2017 13th International Computer Engineering Conference (ICENCO)195–201 (IEEE)

  10. Xiahou J, Lin F, Huang Q, Zeng W (2018) Multi-datacenter cloud storage service selection strategy based on ahp and backward cloud generator model. Neural Comput Appl 29:71–85


  11. Damera V, Nagesh A, Nagaratna M (2020) Trust evaluation models for cloud computing. Int J Sci Technol Res 9:1964–1971


  12. Spanoudakis G, Damiani E & Maña A (2012) Certifying services in cloud: The case for a hybrid, incremental and multi-layer approach. In 2012 IEEE 14th International Symposium on High-Assurance Systems Engineering 175–176 IEEE

  13. Sunyaev A, Schneider S (2013) Cloud services certification. Commun ACM 56:33–36


  14. Huang J, Nicol DM (2013) Trust mechanisms for cloud computing. J Cloud Comput Adv Syst Appl 2:1–14


  15. Muñoz A & Maña A (2013) Bridging the gap between software certification and trusted computing for securing cloud computing. In 2013 IEEE ninth world congress on services, 103–110 IEEE

  16. Mehdi M, Bouguila N, Bentahar J (2014) Probabilistic approach for qos-aware recommender system for trustworthy web service selection. Appl Intell 41:503–524


  17. Qu C & Buyya R (2014) A cloud trust evaluation system using hierarchical fuzzy inference system for service selection. In 2014 IEEE 28th International conference on advanced information networking and applications, 850–857 IEEE

  18. Ma H, Hu Z, Li K, Zhang H (2016) Toward trustworthy cloud service selection: A time-aware approach using interval neutrosophic set. J Parallel Distributed Comput 96:75–94


  19. Ramaswamy A, Balasubramanian A, Vijaykumar P & Varalakshmi P (2011) A mobile agent based approach of ensuring trustworthiness in the cloud. In 2011 International Conference on Recent Trends in Information Technology (ICRTIT) 678–682 IEEE

  20. Mouratidis H, Islam S, Kalloniatis C, Gritzalis S (2013) A framework to support selection of cloud providers based on security and privacy requirements. J Syst Softw 86:2276–2293


  21. Kanwal A, Masood R, Shibli MA, Mumtaz R (2015) Taxonomy for trust models in cloud computing. The Comput J 58:601–626


  22. Prabu Ragavendiran SD, Sowmiya N SP. Analysis of Trust Score of CSPS by Comparing Service Broker Policies and Load Balancing Policies using Cloud Analyst and Fuzzy Inference System. Int J Eng Res Technol (IJERT) RTICCT. 2019;7(01).

  23. Parmar B, Chauhan S (2018) Trusted service selection in cloud computing using topsis


  24. Wang Y, Wen J, Wang X, Tao B, Zhou W. A cloud service trust evaluation model based on combining weights and gray correlation analysis. Secur Commun Netw. 2019;2019(1):2437062.

  25. Mujawar TN, Bhajantri LB (2022) Behavior and feedback based trust computation in cloud environment. J King Saud Univ Inf Sci 34:4956–4967


  26. Hussain A, Chun J, Khan M (2020) A novel customer-centric methodology for optimal service selection (moss) in a cloud environment. Futur Gener Comput Syst 105:562–580


  27. Muñoz A, Gonzalez J, Maña A (2012) A performance-oriented monitoring system for security properties in cloud computing applications. The Comput J 55:979–994


  28. Tao F, Zhang M & Nee A (2019) Digital twin and cloud, fog, edge computing, 171–181. Elsevier

  29. Xu Y, Sun Y, Liu X, Zheng Y (2019) A digital-twin-assisted fault diagnosis using deep transfer learning. IEEE Access 7:19990–19999. https://doi.org/10.1109/ACCESS.2018.2890566


  30. Liu M, Fang S, Dong H, Xu C (2021) Review of digital twin about concepts, technologies, and industrial applications. J Manuf Syst 58:346–361. https://doi.org/10.1016/j.jmsy.2020.06.017


  31. Zheng Y, Yang S, Cheng H (2019) An application framework of digital twin and its case study. J Ambient Intell Humaniz Comput 10:1141–1153


  32. Grieves M, Vickers J (2017) Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems. Transdiscipl. perspectives on complex systems: New findings approaches. pp 85–113


  33. Schluse M & Rossmann J (2016) From simulation to experimentable digital twins: Simulation-based development and operation of complex technical systems. In 2016 IEEE international symposium on systems engineering (ISSE) 1–6 IEEE

  34. Schluse M, Priggemeyer M, Atorf L, Rossmann J (2018) Experimentable digital twins—streamlining simulation-based systems engineering for industry 4.0. IEEE Trans Ind Inform 14:1722–1731


  35. Delbrügger T, Rossmann J (2019) Representing adaptation options in experimentable digital twins of production systems. Int J Comput Integr Manuf 32:352–365


  36. Madni AM, Madni CC, Lucero SD (2019) Leveraging digital twin technology in model-based systems engineering. Systems 7:7


  37. Bao J, Guo D, Li J, Zhang J (2019) The modelling and operations for the digital twin in the context of manufacturing. Enterp Inf Syst 13:534–556


  38. Ullah AS (2019) Modeling and simulation of complex manufacturing phenomena using sensor signals from the perspective of industry 4.0. Adv Eng Inform 39:1–13


  39. Alam KM, Saddik AE (2017) C2ps: A digital twin architecture reference model for the cloud-based cyber-physical systems. IEEE Access 5:2050–2062. https://doi.org/10.1109/ACCESS.2017.2657006


  40. Halenarova L, Halenar I, Tanuska P (2022) Digital twin proposal using the Matlab-Stateflow model and Docker containers. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/KI55792.2022.9925931

  41. Zhang H, Luo T, Wang Q (2023) Adaptive digital twin server deployment for dynamic edge networks in IoT system. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICCC57788.2023.10233465

  42. John J, John Singh K. Trust value evaluation of cloud service providers using fuzzy inference based analytical process. Sci Rep. 2024;14(1):18028.

  43. Cule E, De Iorio M (2013) Ridge regression in prediction problems: automatic choice of the ridge parameter. Genet Epidemiol 37:704–714


  44. Sanchez A, Zamiri E, Castro AD (2023) Digital controllers design using the ESA control toolbox in Matlab Simulink. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ESPC59009.2023.10298161

  45. Muñoz A, Maña A, Serrano D (2009) Avispa in the validation of ambient intelligence scenarios. In 2009 International Conference on Availability, Reliability and Security 420–426 IEEE


Additional information

All the data used is included in the manuscript itself.

The corresponding author is responsible for submitting a competing interests statement on behalf of both the authors of this paper.

Conflict of Interest


• All authors have participated in (a) conception and design, or analysis and interpretation of the data; (b) drafting the article or revising it critically for important intellectual content; and (c) approval of the final version.

• The authors have no affiliation with any organization with a direct or indirect financial interest in the subject matter discussed in the manuscript.

• The following authors have affiliations with organizations with direct or indirect financial interest in the subject matter discussed in the manuscript:

John Singh K

Professor

SCORE, Vellore Institute of Technology, Vellore

Jomina John

Research Scholar

SCORE, Vellore Institute of Technology, Vellore

Funding

Open access funding provided by Vellore Institute of Technology. Funding information is not applicable / No funding was received.

Author information


Contributions

Jomina John completed the digital twin modeling, the fuzzy inferred trust score calculation, and the entire fuzzification process, and drafted and formatted the implementation part of the research paper. John Singh K completed the defuzzification, trust score verification, and comparative analysis, wrote the comparative analysis part of the research paper, and carried out the verification.

Corresponding author

Correspondence to John Singh K.

Ethics declarations

Ethics approval and consent to participate

This article does not contain any studies with human participants or animals performed by any of the authors.

Consent for publication

Informed consent was obtained from all individual participants included in the study.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Reprints and permissions

About this article


Cite this article

John, J., K, J.S. Predictive digital twin driven trust model for cloud service providers with Fuzzy inferred trust score calculation. J Cloud Comp 13, 134 (2024). https://doi.org/10.1186/s13677-024-00694-w

