Cloud Computing Beyond

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 54509

Special Issue Editor


Prof. Dr. Eui-Nam Huh
Guest Editor
Department of Computer Science & Engineering, College of Software, Kyung Hee University, Seoul 02447, Republic of Korea
Interests: cloud computing; the Internet of Things; future internet; distributed real-time systems; mobile computing

Special Issue Information

Dear Colleagues,

Cloud computing has become an essential infrastructure in the ICT industry. SaaS, PaaS, and IaaS are now widely used by companies and in personal computing, and many cloud services require interoperability in order to extend their service capability and business market. At the same time, AI (artificial intelligence)-based applications are emerging in many industries, and cloud computing serves them well by providing fast turnaround for training on large datasets. Furthermore, cloud services are migrating to edge nodes to support real-time services as well as AI applications, and conventional virtual-machine-based cloud services are challenged by many emerging issues. The distributed cloud, which pushes cloud capabilities to the edge of the network, can therefore be considered a new paradigm that integrates with the edge cloud, where resources are virtualized and shared with CSPs (cloud service providers) over high-performance 5G networks. Future computing built on such infrastructures, called "cloud computing beyond", needs to address the technical challenges listed in the keywords below. Other challenging topics are also welcome in this Special Issue.

Prof. Dr. Eui-Nam Huh
Guest Editor

Keywords

  • real-time cloud services
  • cloud infrastructure for AI
  • distributed cloud with 5G
  • parallel and distributed deep learning
  • edge cloud resource provisioning
  • load balancing in edge cloud
  • micro-services-based services and systems
  • container management
  • offloading
  • security
  • trust and forensics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research

21 pages, 4708 KiB  
Article
Containerized Microservices Orchestration and Provisioning in Cloud Computing: A Conceptual Framework and Future Perspectives
by Abdul Saboor, Mohd Fadzil Hassan, Rehan Akbar, Syed Nasir Mehmood Shah, Farrukh Hassan, Saeed Ahmed Magsi and Muhammad Aadil Siddiqui
Appl. Sci. 2022, 12(12), 5793; https://doi.org/10.3390/app12125793 - 7 Jun 2022
Cited by 17 | Viewed by 7010
Abstract
Cloud computing is a rapidly growing paradigm which has evolved from having a monolithic to microservices architecture. The importance of cloud data centers has expanded dramatically in the previous decade, and they are now regarded as the backbone of the modern economy. Cloud-based microservices architecture is incorporated by firms such as Netflix, Twitter, eBay, Amazon, Hailo, Groupon, and Zalando. Such cloud computing arrangements deal with the parallel deployment of data-intensive workloads in real time. Moreover, commonly utilized cloud services such as the web and email require continuous operation without interruption. For that purpose, cloud service providers must optimize resource management, efficient energy usage, and carbon footprint reduction. This study presents a conceptual framework to manage the high amount of microservice execution while reducing response time, energy consumption, and execution costs. The proposed framework suggests four key agent services: (1) intelligent partitioning: responsible for microservice classification; (2) dynamic allocation: used for pre-execution distribution of microservices among containers and then makes decisions for dynamic allocation of microservices at runtime; (3) resource optimization: in charge of shifting workloads and ensuring optimal resource use; (4) mutation actions: these are based on procedures that will mutate the microservices based on cloud data center workloads. The suggested framework was partially evaluated using a custom-built simulation environment, which demonstrated its efficiency and potential for implementation in a cloud computing context. The findings show that the engrossment of suggested services can lead to a reduced number of network calls, lower energy consumption, and relatively reduced carbon dioxide emissions. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: IaaS-PaaS-SaaS comparison [4].
Figure 2: Difference between VM and containers.
Figure 3: MiCADO architecture; adapted with permission from [24].
Figure 4: A death star graph showing inter-relationships among microservices [34].
Figure 5: Key findings from the literature review of concepts, theories, and existing frameworks [6,7,8,9,10,11,12,13,14,15,18,19,20,23,29,33,43,44,45,46,47,48,49,50,51].
Figure 6: Research methodology flow.
Figure 7: A microservice system prototype.
Figure 8: Layered representation of the proposed framework.
Figure 9: Schema for intelligent partitioning.
Figure 10: Resource optimization schema.
Figure 11: Mutation actions schema.
Figure 12: Response time over a period of 12.5 h when microservices are deployed using the arbitrary distribution model.
Figure 13: Response time over a period of 12.5 h when microservices were deployed using the design-pattern distribution.
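The four agent services above are described conceptually in the paper rather than as code. As a rough illustration of the kind of logic an "intelligent partitioning" and "dynamic allocation" agent might apply, the following Python sketch classifies hypothetical microservices by resource intensity and packs them into containers; the class names, thresholds, and greedy packing rule are assumptions for illustration, not the authors' framework.

```python
# Illustrative sketch only: a toy version of "intelligent partitioning" and
# "dynamic allocation" for microservices; thresholds, classes, and the greedy
# packing rule are assumptions, not the framework described in the paper.
from dataclasses import dataclass

@dataclass
class Microservice:
    name: str
    cpu: float   # normalized CPU demand (0..1)
    mem: float   # normalized memory demand (0..1)

def classify(ms: Microservice) -> str:
    """Assign a coarse class used to co-locate compatible workloads."""
    if ms.cpu > 0.6 and ms.mem <= 0.6:
        return "cpu-bound"
    if ms.mem > 0.6 and ms.cpu <= 0.6:
        return "memory-bound"
    return "balanced"

def allocate(services, container_capacity=1.0):
    """Greedy pre-execution placement: fill containers per class until capacity."""
    containers = {}
    for ms in sorted(services, key=lambda m: m.cpu + m.mem, reverse=True):
        cls = classify(ms)
        bins = containers.setdefault(cls, [[]])
        last = bins[-1]
        if sum(m.cpu for m in last) + ms.cpu > container_capacity:
            bins.append([ms])
        else:
            last.append(ms)
    return containers

if __name__ == "__main__":
    demo = [Microservice("auth", 0.7, 0.2), Microservice("cart", 0.3, 0.8),
            Microservice("search", 0.5, 0.5), Microservice("billing", 0.4, 0.3)]
    for cls, bins in allocate(demo).items():
        print(cls, [[m.name for m in b] for b in bins])
```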
13 pages, 419 KiB  
Article
CLAP-PRE: Certificateless Autonomous Path Proxy Re-Encryption for Data Sharing in the Cloud
by Chengdong Ren, Xiaolei Dong, Jiachen Shen, Zhenfu Cao and Yuanjian Zhou
Appl. Sci. 2022, 12(9), 4353; https://doi.org/10.3390/app12094353 - 25 Apr 2022
Cited by 5 | Viewed by 1801
Abstract
In e-health systems, patients encrypt their personal health data for privacy purposes and upload them to the cloud. There exists a need for sharing patient health data with doctors for healing purposes in one’s own preferred order. To achieve this fine-gained access control to delegation paths, some researchers have designed a new proxy re-encryption (PRE) scheme called autonomous path proxy re-encryption (AP-PRE), where the delegator can control the whole delegation path in a multi-hop delegation process. In this paper, we introduce a certificateless autonomous path proxy re-encryption (CLAP-PRE) using multilinear maps, which holds both the properties (i.e., certificateless, autonomous path) of certificateless encryption and autonomous path proxy re-encryption. In the proposed scheme, (a) each user has two public keys (user’s identity and traditional public key) with corresponding private keys, and (b) each ciphertext is first re-encrypted from a public key encryption (PKE) scheme to an identity-based encryption (IBE) scheme and then transformed in the IBE scheme. Our scheme is an IND-CPA secure CLAP-PRE scheme under the k-multilinear decisional Diffie–Hellman (k-MDDH) assumption in the random oracle model. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: Electronic Health Records Sharing with Patient's Preferred Doctors.
Figure 2: Forked By Malicious Data User.
Figure 3: Data Sharing in Cloud.
24 pages, 2794 KiB  
Article
CloudOps: Towards the Operationalization of the Cloud Continuum: Concepts, Challenges and a Reference Framework
by Juncal Alonso, Leire Orue-Echevarria and Maider Huarte
Appl. Sci. 2022, 12(9), 4347; https://doi.org/10.3390/app12094347 - 25 Apr 2022
Cited by 9 | Viewed by 3399
Abstract
The current trend of developing highly distributed, context aware, heterogeneous computing intense and data-sensitive applications is changing the boundaries of cloud computing. Encouraged by the growing IoT paradigm and with flexible edge devices available, an ecosystem of a combination of resources, ranging from high density compute and storage to very lightweight embedded computers running on batteries or solar power, is available for DevOps teams from what is known as the Cloud Continuum. In this dynamic context, manageability is key, as well as controlled operations and resources monitoring for handling anomalies. Unfortunately, the operation and management of such heterogeneous computing environments (including edge, cloud and network services) is complex and operators face challenges such as the continuous optimization and autonomous (re-)deployment of context-aware stateless and stateful applications where, however, they must ensure service continuity while anticipating potential failures in the underlying infrastructure. In this paper, we propose a novel CloudOps workflow (extending the traditional DevOps pipeline), proposing techniques and methods for applications’ operators to fully embrace the possibilities of the Cloud Continuum. Our approach will support DevOps teams in the operationalization of the Cloud Continuum. Secondly, we provide an extensive explanation of the scope, possibilities and future of the CloudOps. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: Challenges in the operationalization (Ops) of applications in the Cloud Continuum.
Figure 2: CloudOps workflow for the operationalization in the Cloud Continuum.
Figure 3: CloudOps reference framework for the operationalization of applications in the Cloud Continuum.
Figure 4: Sequence diagram of the proposed CloudOps Optimizer workflow through the reference framework.
Figure 5: Sequence diagram of the proposed Cloud Deployment workflow through the reference framework.
Figure 6: Sequence diagram of the proposed Cloud Self-healing workflow through the reference framework.
Figure 7: Architecture of the MVP of the proposed CloudOps framework validated in an e-health scenario.
Figure 8: Snippet of the application description JSON file, where the application's NFRs are described.
Figure 9: Snippet of the application description JSON file, where the best combination of infrastructural elements is suggested through the "schema" element.
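Figures 8 and 9 refer to an application description file that captures non-functional requirements (NFRs) and a suggested infrastructure combination, but the page does not reproduce its schema. The sketch below is only a hypothetical illustration of what such a descriptor could contain; the field names and the "schema" element shown here are assumptions, not the authors' format.

```python
# Hypothetical application descriptor with NFRs, loosely inspired by the
# CloudOps description file mentioned in Figures 8 and 9; field names and the
# "schema" element are assumptions, not the authors' actual JSON layout.
import json

app_descriptor = {
    "application": "ehealth-demo",
    "components": [
        {"name": "frontend", "type": "stateless", "replicas": 2},
        {"name": "patient-db", "type": "stateful", "replicas": 1},
    ],
    "nfrs": {                      # non-functional requirements to optimize for
        "max_latency_ms": 150,
        "availability": 0.999,
        "deployment_zone": "edge-preferred",
    },
    "schema": [                    # suggested placement across the Cloud Continuum
        {"component": "frontend", "target": "edge-node-1"},
        {"component": "patient-db", "target": "cloud-region-eu"},
    ],
}

print(json.dumps(app_descriptor, indent=2))
```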
20 pages, 3293 KiB  
Article
Analysis of Complexity and Performance for Automated Deployment of a Software Environment into the Cloud
by Marian Lăcătușu, Anca Daniela Ionita, Florin Daniel Anton and Florin Lăcătușu
Appl. Sci. 2022, 12(9), 4183; https://doi.org/10.3390/app12094183 - 21 Apr 2022
Cited by 7 | Viewed by 2694
Abstract
Moving to the cloud is a topic that tends to be present in all enterprises that have digitalized their activities. This includes the need to work with software environments specific to various business domains, accessed as services supported by various cloud providers. Besides provisioning, other important issues to be considered for cloud services are complexity and performance. This paper evaluates the processes to be followed for the deployment of such a software environment in the cloud and compares the manual and automated methods in terms of complexity. We consider several metrics that address multiple concerns: the multitude of independent paths, the capability to distinguish small changes in the process structure, plus the complexity of the human tasks, for which specific metrics are proposed. We thus show that the manual deployment process is from two to seven times more complex than the automatic one, depending on the metrics applied. This proves the importance of automation for making such a service more accessible to enterprises, regardless of their level of technical know-how in cloud computing. In addition, the performance is tested for an example of an environment and the possibilities to extend to multicloud are discussed. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: The manual deployment process for the IBM Cloud.
Figure 2: The automated deployment process for the IBM Cloud.
Figure 3: The weighted manual deployment process.
Figure 4: The weighted automated deployment process.
Figure 5: CPU consumption of the Mongo and WebGME pods.
Figure 6: Sysdig monitoring dashboard.
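The "multitude of independent paths" metric the authors mention corresponds to a cyclomatic-style complexity of the deployment process graph. As a minimal, assumed illustration (not the paper's exact metric set or process models), the sketch below computes V(G) = E − N + 2P for a process represented as a directed graph.

```python
# Minimal sketch: cyclomatic complexity V(G) = E - N + 2P of a process graph,
# one plausible way to count independent paths; the example edges are invented
# and do not reproduce the deployment processes evaluated in the paper.
def cyclomatic_complexity(edges, connected_components=1):
    nodes = {n for e in edges for n in e}
    return len(edges) - len(nodes) + 2 * connected_components

# Toy "manual" process with decision/retry branches vs. a linear "automated" one.
manual = [("start", "configure"), ("configure", "check"), ("check", "configure"),
          ("check", "deploy"), ("deploy", "verify"), ("verify", "deploy"),
          ("verify", "end")]
automated = [("start", "pipeline"), ("pipeline", "deploy"), ("deploy", "end")]

print("manual V(G):", cyclomatic_complexity(manual))       # more independent paths
print("automated V(G):", cyclomatic_complexity(automated))  # fewer independent paths
```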
15 pages, 3407 KiB  
Article
A Resource Utilization Prediction Model for Cloud Data Centers Using Evolutionary Algorithms and Machine Learning Techniques
by Sania Malik, Muhammad Tahir, Muhammad Sardaraz and Abdullah Alourani
Appl. Sci. 2022, 12(4), 2160; https://doi.org/10.3390/app12042160 - 18 Feb 2022
Cited by 41 | Viewed by 7498
Abstract
Cloud computing has revolutionized the modes of computing. With huge success and diverse benefits, the paradigm faces several challenges as well. Power consumption, dynamic resource scaling, and over- and under-provisioning issues are challenges for the cloud computing paradigm. The research has been carried out in cloud computing for resource utilization prediction to overcome over- and under-provisioning issues. Over-provisioning of resources consumes more energy and leads to high costs. However, under-provisioning induces Service Level Agreement (SLA) violation and Quality of Service (QoS) degradation. Most of the existing mechanisms focus on single resource utilization prediction, such as memory, CPU, storage, network, or servers allocated to cloud applications but overlook the correlation among resources. This research focuses on multi-resource utilization prediction using Functional Link Neural Network (FLNN) with hybrid Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The proposed technique is evaluated on Google cluster traces data. Experimental results show that the proposed model yields better accuracy as compared to traditional techniques. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: Illustration of the flow of the GA-PSO algorithm.
Figure 2: Illustration of the flow of the hybrid model in network training.
Figure 3: Architecture of the predictive systems.
Figure 4: Univariate CPU utilization prediction for different models.
Figure 5: Univariate memory utilization prediction for different models.
Figure 6: Multivariate CPU utilization prediction for different models.
Figure 7: Multivariate memory utilization prediction for different models.
Figure 8: Percentage improvement gain of the proposed model over the other models in the univariate input case.
Figure 9: Percentage improvement gain of the proposed model over the other models in the multivariate input case.
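For readers unfamiliar with Functional Link Neural Networks, the sketch below shows the core idea: each input is expanded with fixed nonlinear basis functions and only a linear output layer is trained. The trigonometric expansion and the plain LMS-style update stand in for the paper's GA-PSO-trained weights; everything here is an assumption for illustration, not the authors' code.

```python
# Minimal FLNN sketch: trigonometric functional expansion + linear output layer.
# In the paper the weights are tuned with a hybrid GA-PSO; here a simple
# gradient (LMS-style) step is used purely for illustration.
import numpy as np

def expand(x):
    """Functional link expansion of one input vector x (e.g., CPU, memory)."""
    feats = [x, np.sin(np.pi * x), np.cos(np.pi * x)]
    return np.concatenate([[1.0]] + feats)   # bias + expanded features

def predict(w, x):
    return expand(x) @ w

def train(X, y, lr=0.05, epochs=200):
    w = np.zeros(expand(X[0]).shape[0])
    for _ in range(epochs):
        for x, target in zip(X, y):
            err = target - predict(w, x)
            w += lr * err * expand(x)         # stand-in for the GA-PSO search
    return w

# Toy multi-resource sample: inputs = [cpu_t, mem_t], target = cpu_{t+1}.
X = np.array([[0.2, 0.3], [0.4, 0.35], [0.6, 0.5], [0.8, 0.7]])
y = np.array([0.25, 0.45, 0.65, 0.85])
w = train(X, y)
print(predict(w, np.array([0.5, 0.4])))
```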
32 pages, 9760 KiB  
Article
Machine Learning Based on Resampling Approaches and Deep Reinforcement Learning for Credit Card Fraud Detection Systems
by Tran Khanh Dang, Thanh Cong Tran, Luc Minh Tuan and Mai Viet Tiep
Appl. Sci. 2021, 11(21), 10004; https://doi.org/10.3390/app112110004 - 26 Oct 2021
Cited by 26 | Viewed by 5518
Abstract
The problem of imbalanced datasets is a significant concern when creating reliable credit card fraud (CCF) detection systems. In this work, we study and evaluate recent advances in machine learning (ML) algorithms and deep reinforcement learning (DRL) used for CCF detection systems, including fraud and non-fraud labels. Based on two resampling approaches, SMOTE and ADASYN are used to resample the imbalanced CCF dataset. ML algorithms are, then, applied to this balanced dataset to establish CCF detection systems. Next, DRL is employed to create detection systems based on the imbalanced CCF dataset. The diverse classification metrics are indicated to thoroughly evaluate the performance of these ML and DRL models. Through empirical experiments, we identify the reliable degree of ML models based on two resampling approaches and DRL models for CCF detection. When SMOTE and ADASYN are used to resampling original CCF datasets before training/test split, the ML models show very high outcomes of above 99% accuracy. However, when these techniques are employed to resample for only the training CCF datasets, these ML models show lower results, particularly in terms of logistic regression with 1.81% precision and 3.55% F1 score for using ADASYN. Our work reveals the DRL model is ineffective and achieves low performance, with only 34.8% accuracy. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: Card fraud worldwide from 2010 to 2027 [1].
Figure 2: Fraud class histogram with the imbalanced dataset.
Figure 3: Plot of the Amount value.
Figure 4: SMOTE linear interpolation of a randomly chosen minority sample (k = 4 neighbors).
Figure 5: Machine learning algorithms.
Figure 6: Sigmoid function.
Figure 7: Random forest with two trees [34].
Figure 8: Evolution of the XGBoost algorithm from decision trees [40].
Figure 9: Structure of the DNN.
Figure 10: Classification evaluation indexes.
Figure 11: ICMDP process.
Figures 12–23: Fundamental, combined, and AUC classification performance measurements of the ML algorithms with SMOTE and ADASYN under resampling approaches 1 and 2.
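The abstract's key methodological point is that resampling before the train/test split inflates accuracy, while resampling only the training data gives a more honest picture. A short sketch using imbalanced-learn makes this concrete; the synthetic data and classifier choice are illustrative assumptions, not the credit card dataset or models evaluated in the paper.

```python
# Sketch of "resampling approach 2" from the abstract: oversample only the
# training split so the test set keeps its original class imbalance.
# Data and classifier are synthetic/illustrative, not the paper's setup.
from imblearn.over_sampling import SMOTE, ADASYN
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.99, 0.01],
                           n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for sampler in (SMOTE(random_state=0), ADASYN(random_state=0)):
    X_res, y_res = sampler.fit_resample(X_tr, y_tr)   # resample training data only
    clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
    print(type(sampler).__name__)
    print(classification_report(y_te, clf.predict(X_te), digits=3))
```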
17 pages, 529 KiB  
Article
Energy-Efficient Load Balancing Algorithm for Workflow Scheduling in Cloud Data Centers Using Queuing and Thresholds
by Nimra Malik, Muhammad Sardaraz, Muhammad Tahir, Babar Shah, Gohar Ali and Fernando Moreira
Appl. Sci. 2021, 11(13), 5849; https://doi.org/10.3390/app11135849 - 23 Jun 2021
Cited by 30 | Viewed by 4423
Abstract
Cloud computing is a rapidly growing technology that has been implemented in various fields in recent years, such as business, research, industry, and computing. Cloud computing provides different services over the internet, thus eliminating the need for personalized hardware and other resources. Cloud computing environments face some challenges in terms of resource utilization, energy efficiency, heterogeneous resources, etc. Tasks scheduling and virtual machines (VMs) are used as consolidation techniques in order to tackle these issues. Tasks scheduling has been extensively studied in the literature. The problem has been studied with different parameters and objectives. In this article, we address the problem of energy consumption and efficient resource utilization in virtualized cloud data centers. The proposed algorithm is based on task classification and thresholds for efficient scheduling and better resource utilization. In the first phase, workflow tasks are pre-processed to avoid bottlenecks by placing tasks with more dependencies and long execution times in separate queues. In the next step, tasks are classified based on the intensities of the required resources. Finally, Particle Swarm Optimization (PSO) is used to select the best schedules. Experiments were performed to validate the proposed technique. Comparative results obtained on benchmark datasets are presented. The results show the effectiveness of the proposed algorithm over that of the other algorithms to which it was compared in terms of energy consumption, makespan, and load balancing. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: Structures of the workflows used for the experiments: (a) Sipht, (b) CyberShake, (c) Epigenomics, (d) LIGO, and (e) Montage.
Figures 2–6: Percent improvement gain of the proposed algorithm over the other methods on the Montage, Sipht, LIGO, CyberShake, and Epigenomics datasets, respectively.
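As a rough, self-contained illustration of the final PSO step described above (selecting a task-to-VM schedule that trades off makespan against energy), here is a minimal particle swarm over random-key encodings. The fitness function, encoding, and constants are assumptions for illustration, not the authors' algorithm or workloads.

```python
# Minimal PSO sketch for task-to-VM assignment. Particles are "random keys":
# task i goes to VM floor(position[i]) clipped to the VM range. The cost blends
# makespan and a crude energy proxy; all constants are illustrative only.
import random

TASKS = [3, 7, 2, 9, 4, 6, 5]         # task lengths
VM_SPEED = [1.0, 1.5, 2.0]            # relative VM speeds

def cost(position):
    loads = [0.0] * len(VM_SPEED)
    for length, p in zip(TASKS, position):
        vm = min(int(p), len(VM_SPEED) - 1)
        loads[vm] += length / VM_SPEED[vm]
    makespan = max(loads)
    energy = sum(l * s for l, s in zip(loads, VM_SPEED))  # toy energy proxy
    return makespan + 0.1 * energy

def pso(n_particles=20, iters=100, w=0.7, c1=1.4, c2=1.4):
    dim, hi = len(TASKS), len(VM_SPEED)
    pos = [[random.uniform(0, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), hi - 1e-9)
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return gbest, cost(gbest)

print(pso())
```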
20 pages, 1227 KiB  
Article
Brainware Computing: Concepts, Scopes and Challenges
by Eui-Nam Huh and Md Imtiaz Hossain
Appl. Sci. 2021, 11(11), 5303; https://doi.org/10.3390/app11115303 - 7 Jun 2021
Cited by 9 | Viewed by 3711
Abstract
Over the decades, robotics technology has acquired sufficient advancement through the progression of 5G Internet, Artificial Intelligence (AI), Internet of Things (IoT), Cloud, and Edge Computing. Though nowadays, Cobot and Service Oriented Architecture (SOA) supported robots with edge computing paradigms have achieved remarkable performances in diverse applications, the existing SOA robotics technology fails to develop a multi-domain expert with high performing robots and demands improvement to Service-Oriented Brain, SOB (including AI model, driving service application and metadata) enabling robot for deploying brain and a new computing model with more scalability and flexibility. In this paper, instead of focusing on SOA and Robot as a Service (RaaS) model, we propose a novel computing architecture, addressed as Brainware Computing, for driving multiple domain-specific brains one-at-a-time in a single hardware robot according to the service, addressed as Brain as a Service (BaaS). In Brainware Computing, each robot can install and remove the virtual machine, which contains SOB and operating applications from the nearest edge cloud. Secondly, we provide an extensive explanation of the scope and possibilities of Brainware Computing. Finally, we demonstrate several challenges and opportunities and then concluded with future research directions in the field of Brainware Computing. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: Proposed Brainware Platform.
Figure 2: A single operational flow between edge devices and edge cloud with architecture.
Figure 3: Service-oriented brain searching, learning updates and deployment (different colors represent different services).
Figure 4: Semantic environment understanding.
Figure 5: Service aware knowledge updating in the edge cloud.
Figure 6: Encoder–decoder example for transferring the encoded features of images instead of actual data.
18 pages, 1376 KiB  
Article
HP-SFC: Hybrid Protection Mechanism Using Source Routing for Service Function Chaining
by Syed M. Raza, Haekwon Jeong, Moonseong Kim and Hyunseung Choo
Appl. Sci. 2021, 11(11), 5245; https://doi.org/10.3390/app11115245 - 4 Jun 2021
Cited by 2 | Viewed by 2402
Abstract
Service Function Chaining (SFC) is an emerging paradigm aiming to provide flexible service deployment, lifecycle management, and scaling in a micro-service architecture. SFC is defined as a logically connected list of ordered Service Functions (SFs) that require high availability to maintain user experience. The SFC protection mechanism is one way to ensure high availability, and it is achieved by proactively deploying backup SFs and installing backup paths in the network. Recent studies focused on ensuring the availability of backup SFs, but overlooked SFC unavailability due to network failures. This paper extends our previous work to propose a Hybrid Protection mechanism for SFC (HP-SFC) that divides SFC into segments and combines the merits of local and global failure recovery approaches to define an installation policy for backup paths. A novel labeling technique labels SFs instead of SFC, and they are stacked as per the order of SFs in a particular SFC before being inserted into a packet header for traffic steering through segment routing. The emulation results showed that HP-SFC recovered SFC from failure within 20–25 ms depending on the topology and reduced backup paths’ flow entries by at least 8.9% and 64.5% at most. Moreover, the results confirmed that the segmentation approach made HP-SFC less susceptible to changes in network topology than other protection schemes. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: Local and global recovery for link failure (a) in the path between consecutive SFFs and (b) between the SFF and SF.
Figure 2: HP-SFC overlay and underlay networks' architecture and system model.
Figure 3: Traffic detouring example for SFC in the proposed hybrid protection mechanism and the flow tables' configurations for the primary path, backup path, and traffic detouring in the case of failure.
Figure 4: Emulated network topologies in Mininet for the performance evaluation of SFC protection mechanisms: (a) emulated three-layer fat-tree data center topology; (b) emulated enterprise network topology based on the AT&T IP backbone network [24].
Figure 5: Network traffic recovery delay for a single SF chain incurred by HP-SFC: throughput at the primary SF (f1) and backup SF (f1′) in (a) the data center topology and (b) the enterprise topology.
Figure 6: Flow table resource utilization comparison of local recovery, global recovery, SSP, and HP-SFC: flow table resources utilized by backup paths in (a) the data center topology and (b) the enterprise topology.
Figure 7: Data center topology: average RTT increment of SFCs 1, 2, and 9 for local recovery, global recovery, and HP-SFC protection mechanisms; (a) RTT increment for SFF–SFF link failure, (b) RTT increment for SF–SFF link failure.
Figure 8: Enterprise topology: average RTT increment of SFCs 1, 3, and 8 for local recovery, global recovery, and HP-SFC protection mechanisms; (a) RTT increment for SFF–SFF link failure, (b) RTT increment for SF–SFF link failure.
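The labeling idea above, where individual SFs are labeled rather than whole chains and the labels are stacked onto the packet in SFC order, can be pictured with a small sketch. The label values and "header" layout below are invented for illustration only; they are not the paper's wire format or data-plane implementation.

```python
# Toy illustration of per-SF labeling and label stacking for segment routing.
# Label values and the header list are invented; real HP-SFC encodes labels in
# packet headers handled by the SDN data plane.
SF_LABELS = {"firewall": 101, "nat": 102, "dpi": 103, "lb": 104}

def build_label_stack(chain):
    """Stack labels so the first SF to visit ends up on top of the stack."""
    return [SF_LABELS[sf] for sf in reversed(chain)]

def steer(packet, stack):
    """Pop one label per hop; the popped label selects the next SF instance."""
    hops = []
    while stack:
        hops.append(stack.pop())      # top of stack = next SF to traverse
    packet["visited"] = hops
    return packet

sfc = ["firewall", "dpi", "lb"]       # ordered service function chain
stack = build_label_stack(sfc)        # [104, 103, 101]
print(steer({"dst": "10.0.0.7"}, stack))  # visits 101 -> 103 -> 104
```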
16 pages, 3455 KiB  
Article
AAAA: SSO and MFA Implementation in Multi-Cloud to Mitigate Rising Threats and Concerns Related to User Metadata
by Muhammad Iftikhar Hussain, Jingsha He, Nafei Zhu, Fahad Sabah, Zulfiqar Ali Zardari, Saqib Hussain and Fahad Razque
Appl. Sci. 2021, 11(7), 3012; https://doi.org/10.3390/app11073012 - 27 Mar 2021
Cited by 7 | Viewed by 4576
Abstract
In the modern digital era, everyone is partially or fully integrated with cloud computing to access numerous cloud models, services, and applications. Multi-cloud is a blend of a well-known cloud model under a single umbrella to accomplish all the distinct nature and realm requirements under one service level agreement (SLA). In current era of cloud paradigm as the flood of services, applications, and data access rise over the Internet, the lack of confidentiality of the end user’s credentials is rising to an alarming level. Users typically need to authenticate multiple times to get authority and access the desired services or applications. In this research, we have proposed a completely secure scheme to mitigate multiple authentications usually required from a particular user. In the proposed model, a federated trust is created between two different domains: consumer and provider. All traffic coming towards the service provider is further divided into three phases based on the concerned user’s data risks. Single sign-on (SSO) and multifactor authentication (MFA) are deployed to get authentication, authorization, accountability, and availability (AAAA) to ensure the security and confidentiality of the end user’s credentials. The proposed solution exploits the finding that MFA achieves a better AAAA pattern as compared to SSO. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: Risk level assessment and proposed mitigation techniques.
Figure 2: Medium risk mitigation workflow.
Figure 3: SSO implementation with AWS and Shibboleth.
Figure 4: MFA implementation with Azure and Cisco (NPS: network policy server).
Figure 5: Authentication, authorization, accountability, and availability (AAAA) comparison of SSO and MFA implementation.
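The three-phase split of incoming traffic by data risk can be illustrated with a toy policy function: low-risk requests ride on the SSO federation token alone, while medium- and high-risk requests additionally trigger MFA. The tier names and required factors below are assumptions for illustration, not the paper's exact policy.

```python
# Toy risk-based authentication policy: SSO for everyone, extra factors as the
# data-risk tier rises. Tier names and required factors are illustrative only.
from enum import Enum

class Risk(Enum):
    LOW = 1      # public or non-sensitive resources
    MEDIUM = 2   # personal data
    HIGH = 3     # financial or credential metadata

def required_auth(risk: Risk):
    steps = ["sso_federated_token"]           # federated trust between domains
    if risk in (Risk.MEDIUM, Risk.HIGH):
        steps.append("mfa_otp")               # second factor
    if risk is Risk.HIGH:
        steps.append("mfa_hardware_key")      # third factor for the highest tier
    return steps

for tier in Risk:
    print(tier.name, "->", required_auth(tier))
```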
18 pages, 5569 KiB  
Article
Providing Predictable Quality of Service in a Cloud-Based Web System
by Krzysztof Zatwarnicki
Appl. Sci. 2021, 11(7), 2896; https://doi.org/10.3390/app11072896 - 24 Mar 2021
Cited by 6 | Viewed by 2074
Abstract
Cloud-computing web systems and services revolutionized the web. Nowadays, they are the most important part of the Internet. Cloud-computing systems provide the opportunity for businesses to undergo digital transformation in order to improve efficiency and reduce costs. The sudden shutdown of schools and offices during the pandemic of Covid 19 significantly increased the demand for cloud solutions. Load balancing and sharing mechanisms are implemented in order to reduce the costs and increase the quality of web service. The usage of those methods with adaptive intelligent algorithms can deliver the highest and a predictable quality of service. In this article, a new HTTP request-distribution method in a two-layer architecture of a cluster-based web system is presented. This method allows for the provision of efficient processing and predictable quality by servicing requests in adopted time constraints. The proposed decision algorithms utilize fuzzy-neural models allowing service times to be estimated. This article provides a description of this new solution. It also contains the results of experiments in which the proposed method is compared with other intelligent approaches such as Fuzzy-Neural Request Distribution, and distribution methods often used in production systems. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: Cloud-based web systems: (a) centralized one-layer architecture and (b) distributed two-layer architecture.
Figure 2: Web Cloud Earliest Deadline First (WCEDF) web switch design.
Figure 3: Neuro-fuzzy model: (a) overall view, (b) input fuzzy set functions, and (c) output fuzzy set functions.
Figure 4: Simulation model.
Figure 5: Satisfaction function.
Figure 6: Results of experiments for t_max = 0.5 s: (a) mean service time as a function of load (number of clients), (b) the 95th percentile of service time, (c) the 98th percentile of service time, and (d) the cumulative distribution of service time for 2300 clients.
Figure 7: Satisfaction as a function of load for different t_max values: (a) t_max^s = 0.5 s, t_max^h = 1 s; (b) t_max^s = 0.75 s, t_max^h = 1.5 s; (c) t_max^s = 1 s, t_max^h = 2 s; (d) t_max^s = 2 s, t_max^h = 4 s.
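The core scheduling idea, admitting each HTTP request with a deadline derived from t_max and dispatching the request whose deadline expires soonest to the server expected to finish it in time, can be sketched with a priority queue. The constant service-time estimates below stand in for the paper's fuzzy-neural estimator; everything here is an assumption, not the WCEDF implementation.

```python
# Earliest-deadline-first dispatch sketch for a two-layer web cluster.
# Real WCEDF estimates per-request service times with fuzzy-neural models;
# here a fixed per-class estimate is used purely for illustration.
import heapq
import itertools

SERVICE_ESTIMATE = {"static": 0.05, "dynamic": 0.4}   # seconds (assumed)

def dispatch(requests, servers, t_max=0.5):
    """requests: list of (arrival_time, kind); servers: list of free-at times."""
    queue, order = [], itertools.count()
    for arrival, kind in requests:
        heapq.heappush(queue, (arrival + t_max, next(order), arrival, kind))
    schedule = []
    while queue:
        deadline, _, arrival, kind = heapq.heappop(queue)     # earliest deadline first
        s = min(range(len(servers)), key=lambda i: servers[i])  # earliest-free server
        start = max(servers[s], arrival)
        finish = start + SERVICE_ESTIMATE[kind]
        servers[s] = finish
        schedule.append((kind, s, round(finish, 3), finish <= deadline))
    return schedule

reqs = [(0.00, "dynamic"), (0.01, "static"), (0.02, "dynamic"), (0.05, "static")]
for row in dispatch(reqs, servers=[0.0, 0.0]):
    print(row)   # (kind, server index, finish time, deadline met?)
```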
17 pages, 4558 KiB  
Article
A Cloud-Based UTOPIA Smart Video Surveillance System for Smart Cities
by Chel-Sang Yoon, Hae-Sun Jung, Jong-Won Park, Hak-Geun Lee, Chang-Ho Yun and Yong Woo Lee
Appl. Sci. 2020, 10(18), 6572; https://doi.org/10.3390/app10186572 - 20 Sep 2020
Cited by 11 | Viewed by 3499
Abstract
A smart city is a future city that enables citizens to enjoy Information and Communication Technology (ICT) based smart services with any device, anytime, anywhere. It heavily utilizes Internet of Things. It includes many video cameras to provide various kinds of services for smart cities. Video cameras continuously feed big video data to the smart city system, and smart cities need to process the big video data as fast as it can. This is a very challenging task because big computational power is required to shorten processing time. This paper introduces UTOPIA Smart Video Surveillance, which analyzes the big video images using MapReduce, for smart cities. We implemented the smart video surveillance in our middleware platform. This paper explains its mechanism, implementation, and operation and presents performance evaluation results to confirm that the system worked well and is scalable, efficient, reliable, and flexible. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: UTOPIA Smart Video Surveillance (USVS).
Figure 2: The architecture of USVS.
Figure 3: The operation of the USVS.
Figure 4: The video data processing by the MapReduce Analyzer.
Figure 5: UTOPIA's Cloud computing.
Figure 6: The architecture of the UTOPIA Cloud Computing Engine.
Figure 7: The Cloud computing deployment for domestic Cloud computing.
Figure 8: Flow chart of the object detection algorithm implemented for performance evaluation experiments.
Figure 9: Configuration of the cluster system for Cloud computing in performance evaluation experiments.
Figure 10: The total processing time when the number of frames allocated to each map task was increased.
Figure 11: The total processing time when the number of workloads was increased; the number of frames allocated to each map task was fixed at 200.
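The MapReduce treatment of video can be pictured with a small, framework-free sketch: each map task handles a batch of frames and emits detections, and the reduce step aggregates counts per camera. The detection stub and batch size are assumptions for illustration, not the UTOPIA implementation, which runs a real object-detection algorithm on a MapReduce cluster.

```python
# Framework-free MapReduce sketch for frame analysis: map over frame batches,
# then reduce per-camera detection counts. The "detector" is a stub only.
from collections import defaultdict

def detect_objects(frame):
    """Stub detector: pretend every 10th frame contains an object."""
    return 1 if frame["index"] % 10 == 0 else 0

def map_task(frame_batch):
    for frame in frame_batch:
        yield frame["camera"], detect_objects(frame)

def reduce_task(mapped):
    totals = defaultdict(int)
    for camera, count in mapped:
        totals[camera] += count
    return dict(totals)

# 600 frames from two cameras, split into batches of 200 frames per map task.
frames = [{"camera": f"cam{(i % 2) + 1}", "index": i} for i in range(600)]
batches = [frames[i:i + 200] for i in range(0, len(frames), 200)]
mapped = [kv for batch in batches for kv in map_task(batch)]
print(reduce_task(mapped))
```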
23 pages, 3055 KiB  
Article
Fuzzy Based Collaborative Task Offloading Scheme in the Densely Deployed Small-Cell Networks with Multi-Access Edge Computing
by Md Delowar Hossain, Tangina Sultana, VanDung Nguyen, Waqas ur Rahman, Tri D. T. Nguyen, Luan N. T. Huynh and Eui-Nam Huh
Appl. Sci. 2020, 10(9), 3115; https://doi.org/10.3390/app10093115 - 29 Apr 2020
Cited by 20 | Viewed by 3460
Abstract
Accelerating the development of the 5G network and Internet of Things (IoT) application, multi-access edge computing (MEC) in a small-cell network (SCN) is designed to provide computation-intensive and latency-sensitive applications through task offloading. However, without collaboration, the resources of a single MEC server are wasted or sometimes overloaded for different service requests and applications; therefore, it increases the user’s task failure rate and task duration. Meanwhile, the distinct MEC server has faced some challenges to determine where the offloaded task will be processed because the system can hardly predict the demand of end-users in advance. As a result, the quality-of-service (QoS) will be deteriorated because of service interruptions, long execution, and waiting time. To improve the QoS, we propose a novel Fuzzy logic-based collaborative task offloading (FCTO) scheme in MEC-enabled densely deployed small-cell networks. In FCTO, the delay sensitivity of the QoS is considered as the Fuzzy input parameter to make a decision where to offload the task is beneficial. The key is to share computation resources with each other and among MEC servers by using fuzzy-logic approach to select a target MEC server for task offloading. As a result, it can accommodate more computation workload in the MEC system and reduce reliance on the remote cloud. The simulation result of the proposed scheme show that our proposed system provides the best performances in all scenarios with different criteria compared with other baseline algorithms in terms of the average task failure rate, task completion time, and server utilization. Full article
(This article belongs to the Special Issue Cloud Computing Beyond)
Figures
Figure 1: Limited capacity and overload problems.
Figure 2: Collaborative task offloading model: (a) mobile device collaborates with the SBS-MEC server; (b) SBS-MEC server collaborates with the remote cloud.
Figure 3: Collaborative task offloading among SBS-MEC servers.
Figure 4: Proposed collaborative architecture.
Figure 5: Graphical representation of MFs: (a) triangular; (b) open left shoulder; (c) open right shoulder.
Figure 6: Membership functions of the input variables: (a) task size; (b) delay sensitivity; (c) local SBS-MEC VM utilization; (d) network delay; (e) neighboring SBS-MEC VM utilization.
Figure 7: Output membership function for the offloading decision: (a) output membership function; (b) COG calculation.
Figure 8: Performance analysis: (a) average task completion time for different task sizes; (b) the effect of different task sizes on server utilization.
Figure 9: Performance analysis: (a) average task failure rate versus the number of mobile devices; (b) average task completion time for different numbers of mobile devices.
Figure 10: Performance analysis for different VM sizes and numbers of mobile devices: (a) FCTO scheme; (b) WOTO scheme; (c) FCTO scheme (effect of VMs on task failure rate); (d) FCTO scheme (effect of VMs on server utilization).
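To make the fuzzy-logic decision step concrete, the following sketch evaluates triangular membership functions for two of the inputs named above and defuzzifies a toy rule base with a centre-of-gravity (COG) calculation. The membership breakpoints, rules, and output singletons are assumptions for illustration, not the paper's tuned fuzzy system.

```python
# Minimal fuzzy-inference sketch for the offloading decision: triangular
# membership functions, a tiny rule base, and centre-of-gravity (COG)
# defuzzification. Breakpoints and rules are illustrative assumptions only.
def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_score(local_vm_util, delay_sensitivity):
    # Fuzzify two inputs on a 0..1 scale.
    local_busy = trimf(local_vm_util, 0.4, 1.0, 1.6)            # "local server loaded"
    delay_tolerant = trimf(delay_sensitivity, -0.6, 0.0, 0.6)   # "task can wait"
    # Two toy rules: offload if local is busy AND the task tolerates delay;
    # otherwise keep the task on the local SBS-MEC server.
    offload_strength = min(local_busy, delay_tolerant)
    keep_strength = 1.0 - offload_strength
    # COG over two output singletons: keep-local at 0.2, offload-to-neighbour at 0.8.
    num = keep_strength * 0.2 + offload_strength * 0.8
    den = keep_strength + offload_strength
    return num / den

print(offload_score(local_vm_util=0.9, delay_sensitivity=0.2))  # leans towards offloading
print(offload_score(local_vm_util=0.3, delay_sensitivity=0.9))  # stays local
```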