Abstract
Desktop clouds (DC) provide services in non-stationary environments that face reliability and performance threats not found in traditional clusters and datacenters. The idle resources available on computers can be claimed by their users, turned off, or faulted at any time. For instance, platforms such as CernVM and UnaCloud harvest idle resources in computer labs to run virtual machines and support scientific applications. These platforms deal with interruptions and interference caused by both users and applications. This non-stationarity is one of the main sources of issues in the design of reliable desktop cloud infrastructures capable of mitigating their own faults and errors. Based on a fault analysis that we have been carrying out and refining for a couple of years, we have found that reliability problems grow as the number of virtual machines to be executed increases; these virtual machines must first be provisioned on the physical machines where they will be hosted. On the one hand, the main factors that can affect the provisioning of virtual machines in a DC are the use of disk space and the transmission of virtual images over the network. On the other hand, the applications and actions performed by users on the desktops may cause virtual machines to malfunction. In this paper, we propose a strategy based on known techniques applied to a particular environment: the scalable provisioning of virtual machines in desktop clouds. In addition, we describe its implementation and analyze its effectiveness.
1 Introduction
Desktop clouds (DC) are opportunistic platforms based on virtualization that offer cloud computing services on common desktop computers [2]. They take advantage of idle resources in computers while their users perform regular activities. A DC manages these resources to execute virtual machines (VMs), with their operating systems and applications, without the users of these physical machines (PMs) perceiving a slowdown in the performance of the computer or feeling that their security is compromised. A DC is thus a platform in which resources (computing capacity, network and disk) are shared with the user of the physical machine.
DCs, such as CernVM [13] and UnaCloud [12], execute VMs on desktop computers located on university or business campuses, where the aggregate capacity of idle resources is significant [7]. Typically, these DCs offer a subset of the infrastructure services provided by private and public cloud platforms based on dedicated infrastructure, such as OpenStack and Amazon Web Services. Researchers can use DCs to execute scientific and academic tasks just as they use traditional cloud platforms. These tasks run in VMs on desktops, at the same time as the programs launched by the users of these computers [8].
DCs are more susceptible to failures than other cloud platforms because their infrastructure is not based on dedicated data centers or dedicated hardware. Considering our analysis of faults in a DC, presented in [9], we have found that the faults that affect the reliability of this class of systems occur mainly at two moments: during the provisioning and during the execution of the VMs. On the one hand, the provisioning of VMs has significant scalability limitations. The disk images used by the VMs, a.k.a. virtual images (VIs), are large files whose transmission may take a long time and is failure-prone. On the other hand, executing VMs on computers while users are present may affect their normal operation. Tasks executed by a DC can be interrupted by applications run by the users. As a result, the DC user may lose the work done so far.
Typically, a DC offers a best-effort service without warranties on the execution of the tasks sent to the platform. Cloud users must check whether their tasks were executed satisfactorily and, if necessary, restart their execution. Reliability is therefore one of the most important aspects to improve in DCs.
This paper revisits our previous work [9] characterizing the faults that can occur in DC platforms. Here, we present a more comprehensive analysis that considers not only the UnaCloud platform but also other DC platforms such as BOINC, cuCloud and CernVM. We propose an improved mitigation strategy to overcome the detected failures. This paper describes a new approach for provisioning VMs that uses pre-loaded templates of virtual images and customized images configured as multiattach disks. These techniques are well known and used in other contexts; here, we apply them to the provisioning of VMs in DCs. According to our preliminary evaluation, this strategy reduces the required transmission time and disk space, which allows us to provision and deploy multiple VMs per host in a very short time and without failures.
The rest of this paper is organized as follows. Section 2 gives a background describing how DC platforms work. Section 3 includes related work regarding reliability in DC systems. With respect to our contributions, Sect. 4 presents our revisited fault analysis, Sect. 5 introduces a new approach for the scalable provisioning of VMs, and Sect. 6 presents the preliminary evaluation. Finally, Sect. 7 concludes the paper and discusses future work.
2 Background About Desktop Cloud Systems
Desktop clouds take advantage of the idle capacity in a set of computers to provide Infrastructure as a Service (IaaS), a form of cloud computing. For DC users, the system offers infrastructure just like any other cloud platform. Behind the scenes, DCs run VMs on desktop computers, such as those found in university computer labs [4]. This section presents a background on the DCs and their operation.
A DC is a computational paradigm that combines volunteer computing and cloud computing [2, 6, 12]. Its goal is to make shared resources available to users in order to provide cloud computing services without using dedicated resources. DCs use idle computing resources of the participating computers to provide services for processing, storage, networking, and applications using VMs running operating systems and their respective applications.
In contrast to traditional cloud platforms, DCs do not rely on specialized hardware or data centers. They use non-dedicated resources, typically heterogeneous, obtained from diverse computers such as those found in the computer labs and offices of a university. In addition, DCs typically do not offer solutions aimed at meeting service-level agreements (SLAs), nor do they offer advanced tools for monitoring or billing. Traditionally, a DC offers a best-effort service that may run computing tasks at lower costs than other, dedicated platforms [1].
Operation of a Desktop Cloud. Typically, a DC uses a client-server architecture: there is a DC server program in charge of receiving and processing requests from users and a DC client program running on each desktop computer. The DC server has (or builds) an inventory of PMs that can be used and a mechanism for allocating VMs on these machines. Basically, each PM has a computational capacity in use and an idle computational capacity that can be exploited. When a request for deploying VMs is received, the DC server determines which PMs can run them. The DC client software on each PM receives instructions from the DC server to copy the required files, create, configure and execute the requested VMs.
The functioning of a DC comprises two phases: (1) a conditioning phase and, (2) an operation phase. The conditioning phase groups four activities: (1.1) Preparing the virtual images. It consists in the creation of the virtual images, including the operating system, libraries and the applications properly configured and customized by the user for its execution. (1.2) Requesting the deployment of one or more VMs from a virtual image. (1.3) Scheduling the resource allocation. The system selects, by means of a location algorithm, the PMs that will be used for the execution of the VMs. As a result, multiple VMs can be assigned to the same PM. (1.4) Provisioning the VMs. The DC copies the virtual images in the PMs, creates the VMs and then configures them. During the operation phase, the DC platform (2.1) controls the VMs, e.g. starting, pausing or stopping a VM; and (2.2) monitors their execution.
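The scheduling step (1.3) described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of a greedy first-fit allocator on a DC server; the function and field names, and the two-resource capacity model, are our assumptions for illustration, not the API of UnaCloud or any other platform.

```python
def allocate(vms, pms):
    """Greedy first-fit allocation of VM requests onto physical machines.

    vms: list of dicts with requested 'cores' and 'ram_gb'.
    pms: list of dicts with idle 'cores' and 'ram_gb' (updated as VMs land).
    Returns a list of (vm_index, pm_index) pairs; raises if a VM cannot fit.
    """
    placement = []
    for i, vm in enumerate(vms):
        for j, pm in enumerate(pms):
            # A PM is eligible if its remaining idle capacity covers the VM.
            if pm["cores"] >= vm["cores"] and pm["ram_gb"] >= vm["ram_gb"]:
                pm["cores"] -= vm["cores"]
                pm["ram_gb"] -= vm["ram_gb"]
                placement.append((i, j))
                break
        else:
            raise RuntimeError(f"no physical machine can host VM {i}")
    return placement
```

Note that nothing prevents several VMs from landing on the same PM, which is exactly the situation described in step (1.3).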
There are many problems that may occur when provisioning VMs. Provisioning implies tasks such as transmitting the virtual image to the desktop, creating the VM or VMs based on that virtual image, configuring the hypervisor to use that virtual image, starting the VM, and copying data and installing additional software in that VM. Any of these tasks may fail. In previous work [5], we noted that a large number of requested machines were not deployed by a DC. We are interested in analyzing the failures that may occur in that phase and providing means to detect, at runtime, the type of problem that is occurring and the proper strategy to apply.
3 Related Work
Many authors have analyzed reliability problems that occur in DC platforms when provisioning VMs. This section presents some extensions and strategies that overcome these limitations.
- Volunteer-based DC platforms are systems where desktop users donate their idle computing resources. An agent in each computer, when detecting some idle capacity, requests a task from the platform and runs a VM to do it. These systems have inherent problems of volatility and availability because users on the desktops may claim idle resources and stop assigned tasks at any time [10]. These systems typically replicate the same tasks to run on multiple desktops; if one desktop fails, the other desktops may report results.
- Appliance-based DC platforms are volunteer-based platforms where desktop computers run optimized virtual images. Instead of using typical virtual images that must be transmitted each time, these platforms use custom images and specialized provisioning software to reduce transmission and improve start-up time. Unlike other platforms that must transmit large virtual images before running VMs, appliance-based systems such as CernVM [13] use the same small-sized virtual image for all users and run any additional software using CernVM-FS, a set of remote read-only file systems. These solutions reduce the space required on each desktop, but increase the use of the network and require a team responsible for maintaining and configuring the virtual image and the file systems with the software.
- Private-cloud-based DC platforms extend existing cloud platforms to integrate physical computers when they are idle. Private-cloud-based DCs must manage the volatility of these physical machines. Several works have extended the monitoring tools existing in the private-cloud platforms to support different strategies. cuCloud [11], for instance, predicts future availability and reliability based on historical information from volunteers, allocating the VMs while considering their probability of failing.
- Opportunistic DC platforms are systems designed to run VMs on desktops without the users of these computers noticing. UnaCloud, for instance, runs one or more VMs on the same PM at the lowest possible priority to minimize interference with applications started by the desktop users. UnaCloud has been used to run HPC and grid computing applications, and the platform has experienced problems related to VM provisioning. To overcome them, UnaCloud has been extended to transmit virtual images using peer-to-peer protocols [5]; using protocols such as BitTorrent, it is possible to reduce the time required to transmit files and the number of failures caused by transmission errors and timeouts.
4 A Revisited Fault Analysis for Desktop Cloud Platforms
Recently, we have updated our fault analysis [9] regarding the provisioning of VMs, considering not only UnaCloud but also other DC platforms. We used the extended chain of threats to analyze faults, errors, failures, and mitigation strategies. As a result, we identified two main types of errors: (1) the DC cannot copy the virtual image to the desktop, and (2) the DC cannot configure and start the VM.
\(\mathbf{E}_\mathbf{1}\): The DC cannot copy the virtual image. This can fail due to communication errors, timeouts, and insufficient disk space on the desktop. Network congestion and the large size of the files to transmit, i.e., the files for the virtual image and the software packages to install, are some of the causes. Figure 1 shows the extended chain of threats for the error E1. Mitigation strategies include:
- M1: Using efficient transmission protocols, such as the P2P file sharing implemented in UnaCloud [5].
- M2: Using mechanisms to reduce the files to transmit, such as the small-sized virtual images used in CernVM [13].
- M3: Using mechanisms to reduce the need for transmitting files, such as the caching of frequently used virtual images used in CernVM [13].
In addition, we propose two other strategies, distilled from experiments performed on UnaCloud:
- M4: Using efficient disk space management, such as the linked-clone disks and the multiattach virtual disks available in hypervisors such as VirtualBox, KVM and VMware. These techniques can be used to run multiple VMs sharing disks among them and, therefore, to optimize disk space on desktops.
- M5: Using allocation methods that consider disk space, preventing the system from assigning desktops without enough space.
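Strategy M5 amounts to a simple eligibility filter applied before allocation. The following is an illustrative sketch under our own assumptions (the function name, the pair-based host representation, and the space model of one base image plus one differential disk per VM are not taken from any specific platform):

```python
def hosts_with_enough_disk(pms, image_size_gb, per_vm_overhead_gb):
    """Return the names of PMs that can hold the base virtual image
    plus the expected overhead of one additional VM's differential disk.

    pms: list of (name, free_disk_gb) pairs.
    """
    needed = image_size_gb + per_vm_overhead_gb
    return [name for name, free in pms if free >= needed]
```

An allocator that only considers hosts returned by such a filter cannot produce the "insufficient disk space" error E1 described above.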
\(\mathbf{E}_\mathbf{2}\): The DC cannot configure and start the VM. This can fail when the virtual image is incompatible with the hypervisor installed on the desktop or does not include some required software. In some DCs, such as UnaCloud and BOINC, the configuration may fail if the virtual image does not satisfy some requirements or does not have some predefined user accounts configured. For instance, these DCs use special types of networking and require specific settings in the virtual image. If the virtual image does not satisfy these requirements, the VMs cannot be configured or started. Figure 2 shows the extended chain of threats for the error E2. Mitigation strategies include:
- M6: Using preconfigured templates of virtual images, already tested by DC administrators and staff, instead of arbitrary images customized by cloud users. For instance, this strategy is used by CernVM and cuCloud, platforms that offer catalogs of images from which users may select to create their VMs.
- M7: Provisioning more VMs than needed, to have backup VMs in case some VMs cannot be configured and started correctly. This strategy is used, for instance, by BOINC [3]: it assigns the same task to many nodes expecting that some, though probably not all of them, will process the task and provide a response.
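One simple way to size the over-provisioning in M7 is to divide the number of VMs needed by the observed per-VM success rate, so that the expected number of successful VMs covers the request. This heuristic (and the function name) is our own illustration, not the policy used by BOINC:

```python
import math

def vms_to_provision(needed, per_vm_success_rate):
    """Heuristic over-provisioning for M7: request enough VMs so that,
    in expectation, at least `needed` of them configure and start correctly.

    per_vm_success_rate: observed fraction of VMs that start successfully.
    """
    if not 0.0 < per_vm_success_rate <= 1.0:
        raise ValueError("success rate must be in (0, 1]")
    return math.ceil(needed / per_vm_success_rate)
```

For instance, with the 98% success rate reported in [5], obtaining 100 working VMs would call for provisioning 103.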
5 Implementing Strategies to Improve UnaCloud Reliability at Provisioning Virtual Machines
Based on our fault analysis, we have extended UnaCloud to implement three strategies that improve reliability during the provisioning of VMs. Previously, we implemented the use of efficient transmission protocols [5]. Now, we are implementing: (1) using preconfigured virtual images, (2) using efficient disk space management by running VMs on multiattach disks, and (3) reducing the files to transmit by preloading base images on the desktops where the VMs will run. The implementation of these strategies is described below.
5.1 Using Preconfigured Templates of Virtual Images
We reviewed the diverse applications we run on UnaCloud. Nowadays, our users create clusters of VMs to run MPI-based applications, especially GROMACS for computational chemistry and other custom HPC applications. Almost all users run the Debian or Ubuntu Linux operating systems, using some distribution of MPI. Instead of requiring users to create their own virtual image, we created a single virtual image that all of them can use.
We created a customized virtual image based on Ubuntu 16.04, installing software such as NFS servers and clients, MPI libraries, and other utility programs. We defined scripts that run at startup and that can be used to request data from servers or to install additional software when a VM starts.
5.2 Using Efficient Disk Space Management Mechanisms
Instead of keeping multiple copies of the same virtual image, one for each VM running on a desktop, we use multiattach virtual disks. In this writing mode, we define a single virtual disk that is shared across multiple VMs running at the same time. The content of the shared disk is not modified; each VM creates a differential disk storing only its own changes. Because our DC users typically only create some configuration files and connect to different NFS remote disks to obtain their data, this results in relatively small files that do not consume large amounts of disk space on the desktops.
Note that this strategy, which we did not find in the other DCs, helps to minimize problems related to consuming unnecessary disk space on the desktops.
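The space saving is easy to quantify. The sketch below compares full per-VM copies against one shared multiattach base plus a differential disk per VM; the function is illustrative, and the sample figures in the usage note (a 3.51 GB base image and roughly 0.29 GB of changes per VM) are taken from the measurements reported later in Sect. 6.

```python
def disk_usage_gb(n_vms, base_image_gb, diff_disk_gb):
    """Disk space consumed on one desktop by n_vms VMs, under two schemes.

    Returns (full_clones, multiattach):
      full_clones  - every VM gets its own copy of the base image.
      multiattach  - one shared read-only base plus a differential disk per VM.
    """
    full_clones = n_vms * base_image_gb
    multiattach = base_image_gb + n_vms * diff_disk_gb
    return full_clones, multiattach
```

With four VMs per desktop, full copies would need about 14.04 GB while the multiattach scheme needs about 4.67 GB, roughly a third of the space.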
5.3 Using Mechanisms to Reduce the Need for Transmitting Files
As a complement to the two previous strategies, we propose to copy, in advance, the files of the template virtual image to the desktops where the VMs will run. Considering that almost all users can run MPI applications using our template, and that we use the computers in the university labs, we copied the templates to these computers and modified the UnaCloud agents to check for the existence of templates before requesting a copy.
We implemented this copying process as an on-demand task. We are considering a new extension where the most-used templates, or the templates required for scheduled experiments, are copied automatically at low-congestion times. UnaCloud may determine upfront the templates to be used in some labs and perform the copies at night or at times when the network has low usage rates.
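The agent-side check described above reduces to a local cache lookup before any transfer is requested. The following is a minimal sketch; the function name, the cache-directory path and the template file naming are assumptions for illustration and do not reflect the actual UnaCloud agent layout.

```python
import os

def local_template_path(template_name, cache_dir="/var/unacloud/templates"):
    """Return the cached template's path if it is already preloaded on this
    desktop, or None, signalling the agent to request a transfer instead."""
    path = os.path.join(cache_dir, template_name)
    return path if os.path.exists(path) else None
```

When the template is found locally, the agent can proceed directly to VM creation, skipping the transmission step entirely, which is what makes the sub-second provisioning times in Sect. 6 possible.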
6 Preliminary Evaluation
We have been working on improving UnaCloud reliability when provisioning VMs. In the past, we had many difficulties achieving successful deployments of more than 20 machines. In 2017, after some improvements, we provisioned clusters of 100 nodes with 98% success [5]. Now, by applying the proposed strategies, we can consistently achieve fully successful, failure-free deployments of up to 400 VMs.
6.1 Provisioning Large Clusters Using Our Approach
To analyze the time and errors when provisioning clusters in UnaCloud, we conducted an experiment using up to 50 desktop computers and provisioned up to 200 VMs. We used a 3.51 GB Ubuntu Server 16.04 virtual image to deploy VMs with 1 GB of RAM, 5 GB of virtual hard disk, and 1 processing core. The VMs ran on desktops with an Intel Core i7-4770 processor, 20 GB of RAM, and a 500 GB hard disk. We used a computer lab with 78 desktops, all connected to a 1 Gbps Ethernet network.
Table 1 shows the average provisioning time. Since, with the proposed strategies, it is not necessary to transmit files to the desktop, the provisioning time is the time spent creating the VM and making the configuration necessary for it to be ready for execution. It is important to note that in our experiments we created up to four VMs on the same PM, and 100% of the VMs were provisioned successfully, without failures during the process.
Table 1 includes the maximum provisioning time of the experiment. In (a), we see the time from 1 to 50 VMs using 1 PM to host 1 VM. In (b), (c) and (d), the ratios are 2, 3 and 4 VMs per PM, respectively. Using a 1 VM/1 PM ratio, provisioning from 1 to 50 VMs takes from 0.98 to 1.10 s.
Table 1 (b) shows that when changing the proportion of VMs per PM, the times increase, because a VM is first created and connected to a preloaded disk in multiattach mode, and subsequently the following VMs are created one by one on the same host. When the ratio is 2 VMs per host, the VMs can be provisioned in between 9.45 and 11.38 s. It is remarkable that we can provision 100 VMs on 50 PMs in just 11.38 s, using a classic 1 Gbps Ethernet shared with the students' regular browsing activities.
Table 1 (c) presents the provisioning time of 3 to 150 VMs on 1 to 50 PMs with a ratio of 3 VMs per host. The times obtained ranged from 18.55 to 21.75 s; even the provisioning of 150 VMs took only 21.75 s.
Finally, Table 1 (d) reports the provisioning times using a ratio of 4 VMs per PM, supplying between 4 and 200 VMs in times between 28.38 and 31.38 s.
6.2 Errors at Provisioning Virtual Machines
Regrettably, our previous monitoring systems reported failed deployments but did not identify the errors that caused the failures. We are now implementing a new monitoring system that identifies, with some level of confidence, the cause of the errors; until then, we cannot quantitatively compare the efficiency of our strategies. This section discusses the errors prevented by the three strategies implemented in UnaCloud and reported in this paper. The following are the faults described in the extended chains of threats in Sect. 4.
- Network congestion and errors. The mentioned strategies reduce (or eliminate) the need to transmit virtual images. Typical users can start VMs using preloaded templates of virtual images on the desktops; these deployments do not need to transfer any files.
- Insufficient space on desktop hard disks. Considering that VMs use multiattach disks, the space required on the desktops is reduced. For instance, according to the results obtained in our experiments, instead of requiring 3.51 GB for each VM running on a desktop, using multiattach disks requires only 0.29 GB for each additional VM running on the same desktop.
- The virtual image does not meet required specifications. Given that we provide a tested virtual image for running the VMs, we ensure that it meets all the requirements of the system. In our tests, we have been able to configure and start all the VMs using our predefined virtual image.
There are two faults that cannot be prevented by the strategies discussed in this paper: (1) desktops being turned off or restarted, and (2) desktops being disconnected from the network at the same time that some virtual images are being copied or VMs are being configured. These faults are inevitable given the non-dedicated nature of the hardware used in DCs.
6.3 Discussion
The proposed strategies are easy to apply, and their benefits are obtained by carrying them out together.
Preloading a disk implies that it already contains a virtual image. Although a normal (non-multiattach) disk can also be preloaded, thus avoiding the network traffic of transferring the virtual image, this type of disk requires cloning mechanisms that consume time when creating VMs and use disk space inefficiently.
We suggest that the platform itself provide the virtual images in the form of a catalog of images ready to be used in the provisioning of VMs. Although this task seems simple, it requires a team in charge of creating the images and implementing changes when necessary. Modifications to a virtual image are a challenge due to the impact they can have on the VMs that have already been created. Therefore, we understand that in the future it will be necessary to develop a version control system to deal with this circumstance.
By using disks in multiattach writing mode, preloaded with preconfigured virtual images that can be connected to multiple VMs at runtime, we not only manage space more efficiently but also avoid transferring voluminous files over the same network through which users access the Internet.
This, on the one hand, decreases the provisioning time and, on the other, significantly improves the performance of the network for users.
In addition, since a newly created VM is immediately connected to a disk with the operating system and the applications already installed and configured, the VM is quickly ready for execution. Therefore, creating one or more VMs on the same PM is a much faster process than the equivalent process of creating VMs by cloning existing ones.
In addition to having virtual images ready to use on disks in multiattach writing mode, our strategies enable the possibility of migrating VMs at run time. In this case, it is sufficient to move the files of the differential disks to the PM on which a VM will run, and it will quickly be running again.
Finally, to implement this strategy in Oracle VirtualBox, it was necessary to develop applications not available in the hypervisor. The applications we created allow us, among other things, to preload a disk on the PM and register it with the hypervisor, to create a virtual machine from an image stored on a multiattach disk, and to create a VM from another one that is connected to a multiattach disk.
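To make the mechanism concrete, the sketch below builds (without executing) the `VBoxManage` command lines that such tooling wraps: marking a preloaded disk as multiattach, creating and registering a VM, and attaching the shared disk to it. The VirtualBox subcommands (`modifymedium`, `createvm`, `storagectl`, `storageattach`) are real CLI commands, but the Python wrapper, its name, and the controller defaults are our own illustration, not the actual UnaCloud applications.

```python
def multiattach_commands(base_vdi, vm_name, controller="SATA"):
    """Build the VBoxManage command lines (as argument lists, not executed)
    needed to share a preloaded disk among VMs in multiattach mode."""
    return [
        # Switch the preloaded base disk to multiattach writing mode.
        ["VBoxManage", "modifymedium", "disk", base_vdi,
         "--type", "multiattach"],
        # Create and register the new VM with the hypervisor.
        ["VBoxManage", "createvm", "--name", vm_name, "--register"],
        # Add a storage controller to the VM.
        ["VBoxManage", "storagectl", vm_name,
         "--name", controller, "--add", "sata"],
        # Attach the shared base disk; VirtualBox creates the per-VM
        # differencing disk automatically on first start.
        ["VBoxManage", "storageattach", vm_name, "--storagectl", controller,
         "--port", "0", "--device", "0", "--type", "hdd",
         "--medium", base_vdi],
    ]
```

In a real agent, each list would be passed to a process runner such as `subprocess.run`; building the commands separately keeps them easy to log and test.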
7 Conclusion and Future Work
In this paper, we present (1) a revisited reliability analysis for desktop cloud systems and (2) a UnaCloud extension that implements strategies to improve reliability at provisioning VMs.
On the one hand, we extended our analysis of faults experienced in UnaCloud to consider faults and mitigation strategies that occur in desktop cloud platforms such as BOINC, CernVM and cuCloud. Our analysis, based on extended chains of threats, includes information not only on failures, errors and faults, but also on the mitigation strategies that can help us face these faults. With respect to the analysis published a year ago, this time we have included new mitigation strategies and redefined others. For instance, using efficient disk space management, such as linked clones and multiattach disks, is a new strategy, while the strategy of educating the desktop cloud user in the creation of their virtual images was redefined.
On the other hand, we implemented the following three strategies in UnaCloud. We (1) defined a template of a virtual image that can be used by almost all the users, (2) used multiattach disks to efficiently manage the disk space in desktops, and (3) preloaded the virtual image in the desktops to reduce the need for transmitting files.
As future work, we are considering using the information gathered by monitoring to improve decisions regarding VM allocation and scheduling. We are also considering new analyses and experiments to validate the findings presented in this paper and to improve the strategies already implemented.
References
Alwabel, A., Walters, R., Wills, G.: Towards a volunteer cloud architecture. In: Tribastone, M., Gilmore, S. (eds.) EPEW 2012. LNCS, vol. 7587, pp. 248–251. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36781-6_18
Alwabel, A., Walters, R.J., Wills, G.B.: A view at desktop clouds. In: International Workshop on Emerging Software as a Service and Analytics (ESaaSA 2014), pp. 55–61. ScitePress, Barcelona (2014)
Anderson, D.P.: BOINC: a system for public-resource computing and storage. In: Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing, pp. 4–10. IEEE Computer Society (2004)
Anderson, D.P.: Volunteer computing: the ultimate cloud. ACM Crossroads 16(3), 7–10 (2010)
Chavarriaga, J., Forero-González, C., Padilla-Agudelo, J., Muñoz, A., Cáliz-Ospino, R., Castro, H.: Scaling the deployment of virtual machines in UnaCloud. In: Mocskos, E., Nesmachnow, S. (eds.) CARLA 2017. CCIS, vol. 796, pp. 399–413. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-73353-1_28
Cunsolo, V.D., Distefano, S., Puliafito, A., Scarpa, M.: Volunteer computing and desktop cloud: the cloud@ home paradigm. In: Eighth IEEE International Symposium on Network Computing and Applications (NCA 2009), Cambridge, MA, USA, pp. 134–139. IEEE (2009)
Gómez, C.E., Díaz, C.O., Forero, C.A., Rosales, E., Castro, H.: Determining the real capacity of a desktop cloud. In: Osthoff, C., Navaux, P.O.A., Barrios Hernandez, C.J., Silva Dias, P.L. (eds.) CARLA 2015. CCIS, vol. 565, pp. 62–72. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-26928-3_5
Gómez, C.E., Chavarriaga, J., Bonilla, D.C., Castro, H.E.: Global snapshot file tracker. In: Florez, H., Diaz, C., Chavarriaga, J. (eds.) ICAI 2018. CCIS, vol. 942, pp. 90–104. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01535-0_7
Gómez, C.E., Chavarriaga, J., Castro, H.E.: Fault characterization and mitigation strategies in desktop cloud systems. In: Meneses, E., Castro, H., Barrios Hernández, C.J., Ramos-Pollan, R. (eds.) CARLA 2018. CCIS, vol. 979, pp. 322–335. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-16205-4_24
Marosi, A., Kovács, J., Kacsuk, P.: Towards a volunteer cloud system. Futur. Gener. Comput. Syst. 29(6), 1442–1451 (2013)
Mengistu, T.M., Alahmadi, A.M., Alsenani, Y., Albuali, A., Che, D.: cuCloud: volunteer computing as a service (VCaaS) system. In: Luo, M., Zhang, L.-J. (eds.) CLOUD 2018. LNCS, vol. 10967, pp. 251–264. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-94295-7_17
Rosales, E., Castro, H., Villamizar, M.: UnaCloud: opportunistic cloud computing infrastructure as a service. In: Second International Conferences on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING 2011), pp. 187–194. ThinkMind (2011)
Segal, B., et al.: LHC cloud computing with CernVM. In: 13th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2010), Jaipur, India, p. 004. PoS (2010)
Acknowledgments
We would like to thank David Camilo Bonilla Verdugo for all his collaboration running the experiments discussed in this paper.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Gómez, C.E., Chavarriaga, J., Tchernykh, A., Castro, H.E. (2020). Improving Reliability for Provisioning of Virtual Machines in Desktop Clouds. In: Schwardmann, U., et al. Euro-Par 2019: Parallel Processing Workshops. Euro-Par 2019. Lecture Notes in Computer Science(), vol 11997. Springer, Cham. https://doi.org/10.1007/978-3-030-48340-1_51
Print ISBN: 978-3-030-48339-5
Online ISBN: 978-3-030-48340-1