
Advances, Systems and Applications

RETRACTED ARTICLE: Improving cloud efficiency through optimized resource allocation technique for load balancing using LSTM machine learning algorithm

This article was retracted on 06 December 2023


Abstract

Allocating resources is crucial in large-scale distributed computing, as networks of computers tackle difficult optimization problems. Within the scope of this discussion, the objective of resource allocation is to achieve maximum overall computing efficiency or throughput. Cloud computing is not the same as grid computing, a version of distributed computing in which physically separate clusters are networked and made accessible to the public. Because of the wide variety of application workloads, allocating multiple virtualized information and communication technology resources within a cloud computing paradigm is a challenging problem. This research focused on the implementation of an application of the LSTM algorithm that provides an intuitive dynamic resource allocation system, which analyses heuristic application resource utilization to ascertain the best extra resource to provide for that application. The software solution was simulated in near real-time, and the resources were allocated by the trained LSTM model. The benefits of integrating these with dynamic routing algorithms, designed specifically for cloud data centre traffic, are discussed. Both Long Short-Term Memory and Monte Carlo Tree Search were investigated, and their efficiencies were compared with one another. Consistent traffic patterns throughout the simulation were shown to improve MCTS performance, but such a situation is usually impossible to put into practice because of the rapidity with which traffic patterns can shift. On the other hand, it was verified that by employing LSTM this problem could be solved and an acceptable SLA achieved. The proposed model is compared with other load balancing techniques for the optimization of resource allocation. Based on the results, the proposed model's accuracy rate is enhanced by approximately 10–15% compared with other models, and it reduces the error rate of the average request blocking probability under traffic load by approximately 9.5–10.2% compared with other models. This means that the proposed technique improves network usage, requiring less time, memory, and central processing unit capacity thanks to its good predictive approach. In future research, we will implement a cloud data centre employing various heuristics and machine learning approaches for load balancing of the energy cloud using firefly algorithms.

Introduction

Cloud services are extensively employed by both companies and individuals in the 21st century because of their efficiency and dependability, among other qualities. One of the significant aspects of cloud data centres is ensuring that their management techniques reduce energy consumption and environmental impact. Therefore, it is critical to deploy new techniques, or enhance existing ones, to ensure that the resources required to reduce energy consumption are maximally allocated to balance the load in the deployment of leading-edge technology such as the Internet of Things and blockchain technology, among others. In large-scale distributed computing, where machines are networked to tackle difficult optimization problems, resource allocation is an extremely important aspect of the process. Within the scope of this discussion, the objective of resource allocation is to achieve maximum overall computing efficiency or throughput [20, 32]. Contrast this with grid computing, in which disparate clusters in different locations are interconnected and made available to users, and it becomes clear that cloud computing is a unique concept. Cloud computing has quickly become the de facto standard for network infrastructure in the IT industry. Increases in both the number of people using and paying for Internet-based services are factors in the meteoric ascent of cloud computing components (see Fig. 1). There is now no doubt that cloud computing is the most cost-effective IT breakthrough for business use. Because of this, small, medium, and struggling businesses now have a fighting chance against larger enterprises by having access to computing hardware. It is a system that tries to evolve with very few or no limits because of its freedom of use, which is achieved through virtualization and service-oriented software.

The utilization of computer resources may now be carried out in one of three separate ways as a direct result of cloud technology. There is no need to worry about the stress and initial expense of procuring equipment, premises, and the IT supply chain. These strategies include, among other things, the ability to be flexible and pay for services on an as-needed basis. In the same way that we use water and gas daily, the cloud computing environment provides users with access to information technology resources. By connecting to a remote server located in a data centre, managed by a third party such as Microsoft Azure, Amazon Web Service (AWS), Facebook, or Google, customers can access a wide range of network, storage, computational, and software capabilities. Cloud innovation has gained a large amount of attention across the research, industry, academic, and commercial sectors due to its performance features (use flexibility, swift resource aggregation, network predominance, and so on), as well as its fast-growing percentage of IT expenditure.

According to the National Institute of Standards and Technology (NIST), “cloud computing” is “a framework that enables widespread, comfortable, on-demand internet backbone access to a consensual pool of resources that can be rapidly allocated and initiated with minimal coordination by the service provider” [24]. Allocating virtualized information and communication technology resources in a cloud computing paradigm is a difficult problem to solve because different application workloads (MapReduce, content distribution, and network web applications) exist with conflicting requirements for the allocation of information and communication technology resource capacity (response time, execution time, resource utilization, etc.). Due to limited available resources and growing customer demands, this job of resource allocation becomes increasingly difficult. As a result, several novel models and strategies have been developed to efficiently distribute resources. Some techniques have made use of dynamic resource allocation methods and models, which centre their attention on a variety of restrictions or objectives to optimize resource allocation.

Predicting network capacity based on real-time studies of traffic is a major obstacle to increasing cloud computing's efficiency [1]. Cloud computing, being extremely dynamic and requiring high data rates, is often insufficiently served by wide area networks (WANs) based on optical transport technologies. This is because the network management plane focuses primarily on network resources while ignoring cloud resource availability. These disadvantages can be reduced by using Deep Learning (DL) and Machine Learning (ML) technologies to automate network self-configuration and fault management. Offline supervised techniques are used in most research findings on DL and ML for optical networks; their core assumption is that the models are trained on past data before being deployed to real-world events. This assumption often does not hold for WANs because of how quickly traffic patterns may change [21, 27]. Innovative analytic strategies are required to successfully mine this massive volume of network data for information of relevance. It is speculated that the conceptual underpinnings of machine learning and deep learning might provide workable answers for the processing of network data.

Fig. 1 Machine Learning-based Cloud Computing Conceptual Components [11]

Automated dynamic resource allocation adjusts the way cloud resources are used to better correspond with the optimization aim of the cloud service provider, which is to maximize the use of computing resources. This study employs an automated dynamic resource allocation system using a machine learning algorithm for the intuitive provision of cloud resources ahead of demand. The approach analyses heuristic data on resource utilization when certain applications are employed by customers and provides the optimal resource for that application with consideration of the user configuration. The allocator provides an extra resource when needed to keep resource utilization optimal, and unused cloud resources are freed up for reallocation.
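To make the allocation loop concrete, the sketch below shows the kind of threshold-driven scale-up/scale-down decision described above; the utilisation thresholds, class names, and sliding-window input are illustrative assumptions rather than the configuration used in this study.

```java
// Illustrative sketch only: threshold values and names are assumptions,
// not taken from the paper's implementation.
import java.util.ArrayList;
import java.util.List;

public class DynamicAllocator {
    // Hypothetical utilisation thresholds for scaling decisions.
    private static final double SCALE_UP_THRESHOLD = 0.80;
    private static final double SCALE_DOWN_THRESHOLD = 0.30;

    /** Returns +1 to allocate an extra resource, -1 to free one, 0 to do nothing. */
    static int decide(List<Double> recentUtilisation) {
        double avg = recentUtilisation.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        if (avg > SCALE_UP_THRESHOLD) return +1;   // application is under pressure
        if (avg < SCALE_DOWN_THRESHOLD) return -1; // resource can be reclaimed
        return 0;
    }

    public static void main(String[] args) {
        List<Double> window = new ArrayList<>(List.of(0.85, 0.90, 0.82));
        System.out.println("Decision: " + decide(window)); // prints +1 (scale up)
    }
}
```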

The first novelty of our approach is the changing composition of computation clusters according to the traffic patterns, depending on the system parameters. Our approach does not consider the small cell cloud as a pre-established entity; instead, the cluster is built dynamically and is able to develop long-term dependence by sustaining continuous error flow via constant error carousels (CEC). In [16], the authors proposed various strategies for cluster formation that could be adopted for a single-user case using a recurrent neural network as a pre-built unit, which may be difficult to train on long temporal relationships because the gradient tends to inflate with time. Our strategy varies in its objectives. Our first strategy, in changing the composition of computation clusters according to the traffic patterns, is to minimize the experienced small cell cloud latency and to moderate traffic volumes so that the LSTM can provide an adequate SLA. A second strategy in our approach is to reduce the power consumption trade-off and remove costly small cells from the clusters in the US26 and Euro28 networks. In our approach, computation cluster provisioning and resource allocation are mutually and concurrently optimized for optimal performance.

The second novelty of our approach is the distributed computation ability of the LSTM, where each node builds its own load vector by gathering data from other nodes. With this distributed approach, our model makes decisions locally using local load vectors. This can be applied to a dynamic and adaptive system topology by considering the current state of the system during load balancing to identify system status changes and by changing parameters dynamically, as sketched below. The distributed computational ability implemented in our approach improves system efficiency by reducing task response time while keeping delays acceptable. In many existing approaches for load balancing and resource allocation in the cloud, small network cells with lower overall delays are selected to participate in the computation process by treating the tasks in a first-in, first-out (FIFO) manner. Treating tasks in this manner may not be the best scheduling practice, especially in circumstances where task characteristics vary in latency constraints and computation load. On the contrary, in our proposed approach, as many cloud cells as needed can be included in both the US26 and Euro28 networks to compute the task. The novelty of our approach makes two major contributions to knowledge. First, it has a customizable design in which metrics (scalability, performance, response time, associated overhead), scheduling rules, and clustering objectives can be set according to individual applications and network requirements. Even though our approach used only the US26 and Euro28 networks, other network requirements can be implemented with this design. Secondly, our approach relies on reduced complexity for optimizing multiple parameters. This promises high perceived user quality and an acceptable service level agreement (SLA).
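A minimal sketch of the local, load-vector-based decision described in this paragraph is given below; the node identifiers and the least-loaded selection policy are illustrative assumptions, not the exact mechanism of our implementation.

```java
// Sketch of a node keeping a local load vector of its peers and making
// an offloading decision purely from local information (assumed policy).
import java.util.HashMap;
import java.util.Map;

public class LoadVectorNode {
    private final String id;
    // Local view of the load reported by peer nodes (the node's "load vector").
    private final Map<String, Double> loadVector = new HashMap<>();

    LoadVectorNode(String id) { this.id = id; }

    /** Update the locally stored load figure for a peer. */
    void report(String peerId, double load) { loadVector.put(peerId, load); }

    /** Pick the currently least-loaded peer using only local information. */
    String leastLoadedPeer() {
        return loadVector.entrySet().stream()
                .min(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(id); // fall back to handling the task locally
    }

    public static void main(String[] args) {
        LoadVectorNode n = new LoadVectorNode("cell-A");
        n.report("cell-B", 0.72);
        n.report("cell-C", 0.35);
        System.out.println("Offload target: " + n.leastLoadedPeer()); // cell-C
    }
}
```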

In summary, this paper reviews different machine learning algorithms and optimization methods for resource allocation in the cloud by discussing how optimization techniques such as the genetic algorithm (GA) can offer the best performance in the field. Understanding these techniques is essential to enhancing energy efficiency and performance analysis when determining the best load balancing technique. We also present how machine learning algorithms such as deep neural networks and support vector machines are applied to energy consumption prediction in the cloud environment. We present a framework for improving energy efficiency in the cloud through optimized resource allocation using the LSTM machine learning algorithm on two network traffic loads, Euro28 and US26. Lastly, we present how multi-objective optimization methods using machine learning can efficiently allocate resources by balancing the load while focusing on reducing the amount of energy consumed, reducing violations of the service level agreement, and simultaneously improving the quality of service.

Related work

Cloud computing

Cloud computing became ubiquitous not long after the launch of Amazon's Elastic Compute Cloud product in 2006. This opened the door for other large service providers to embrace cloud computing and construct cloud system networks with increased resiliency. Computing in the cloud is an intriguing breakthrough because of its pay-as-you-go pricing model and its versatility. Cloud computing solutions function by deploying large central servers across multiple geographical locations and then distributing resources from the servers based on demand. As more advanced tools have been made available, there has been a rise in demand for specific features of cloud computing. Industries and organizations are always on the lookout for a high-capacity network with readily available storage devices to enable the running of their businesses on inexpensive PCs. Because of the pervasive nature of business nowadays, there has been a meteoric rise in cloud computing use. Linux, for example, was widely used and made available for numerous platforms in 2019 thanks to cloud virtualization and custom architecture. Data centres provide the backbone for all these processes by hosting software programs with intensive processing needs. Even though cloud computing is gaining popularity in the information technology industry due to the many benefits it offers, there are still looming impediments to cloud innovation. Some of these impediments include governance, data compliance, security worries, uncertainty in energy efficiency, and adoption strategy difficulties. These challenges are areas of worry, and the endeavour is to discover workable remedies.

The term “cloud computing” (CC) refers to a paradigm that has recently become the most well-known and commonly used one in the fields of information technology and telecommunications (ICT). Cloud customers may not always perceive the value of cloud innovation, even though it supports their everyday search services directly or indirectly through Internet activities. Because of its importance in the worlds of computing and engineering, cloud computing has become a popular term in communication. Cloud computing services enable growing and underprivileged nations to receive required services without limitation, facilitating rapid economic progress [29]. Before the cloud innovation period, establishing a traditional data centre was a difficult process for a company because of the cash requirements for both maintenance and the initial infrastructure investment. In contrast, we now utilize cloud services, in which a computing commodity can simply be rented based on need and a program may be deployed without stress. As many businesses (big and small) attempt to balance their operating costs while also gaining access to superior efficiency tools (such as platforms, infrastructure, and proprietary software), optimizing CC innovation services becomes unavoidable because its numerous benefits align with business requirements. Users have simple, consistent, and scalable access to a shared pool of programmable network assets when cloud computing performance approaches are based on a utility-based commercial model. This concept is the foundation of cloud computing. Customers can make use of the channels of their choosing, based on their specific needs, owing to the dependable and malleable process made possible by virtual machines hosted in the cloud.

Using Genetic Algorithms (GA) and lightweight simulators, Lee et al. [17] devised what they term Topology Aware Resource Allocation (TARA), a model that can predictably allocate resources in an IaaS environment. This model's goal was to optimize MapReduce, and it reduced job completion time by about 50% when benchmarked against application-independent allocation. Toosi et al. [39] developed a Resource Allocation System (RAS) for a cloud service model based on the concept of infrastructure as a service (IaaS). This was done to improve the pricing and profitability of their clients' businesses. This RAS employs a proposed policy to enhance resource usage by sourcing unused resources from other service providers. Xiao, Song, and Chen [41] adopted a different approach for the IaaS cloud service model, optimizing computation for better, more eco-friendly utilization. This is achieved via the introduction of a skewness algorithm, which measures the mismatch of resources in multi-dimensional resource utilization, and by integrating a set of heuristics into their system to prevent system overload.

Load balancing and resource allocation

Load balancing

The concept behind load balancing is to distribute the workload equitably across all the accessible information technology resources. Even if part of the service is down, the key purpose is to keep the service running by providing the processes with acceptable resource utilization. Load balancing also focuses on lowering task delay and optimizing resource usage, resulting in cost-effective, improved system performance. It also offers versatility and flexibility for applications of varying sizes that may change in the future, necessitating the use of more IT resources. Other goals include reducing energy use and carbon emissions, as well as avoiding congestion by supplying resources and meeting QoS standards [9, 13]. As a result, it demands an appropriate load planning mechanism that considers numerous measures.

Load balancing is a mechanism for dispersing a load of many users, over one or more connections, servers, terminals, or other IT resources [10]. This cloud-based technique differs from the traditional architecture of true load balancing. In the cloud industry, many academics across the world are researching and developing various types of optimum resource techniques. The approach employs run-time dispersion to properly balance IT resources and improve performance. In addition to load balancing, we have various additional concerns such as execution time, VM performance, energy savings, VM migration, carbon emissions, QoS and resource management, and so on [22, 42]. Aslam and Shah [4] researched heuristic-based approaches and employed a variety of load types to gain enhanced workflow in the cloud environment. These loads included network, CPU, memory, and others. In their 2017 study, Balaji and Saikiran considered a variety of different problems related to resource allocation and suggested a resource allocation technique that is effective for large task demands. Arunarani et al. [3] conducted a detailed investigation of various job scheduling strategies and determined measures suited for the cloud environment. Initially, their literature was centred on methodologies, parameters, and applications. A few researchers applied security [38] measures to various metrics used in the load balancing context.

Even though cloud computing has garnered a lot of attention, it still has several drawbacks, one of which is load balancing. Some of the challenges facing load balancing in cloud computing include:

  • Virtual Machines (VM) Migration: A whole machine may be perceived as a file or series of files using virtualization, and a VM can also be relocated between physical computers to relieve the burden on a heavily loaded actual machine. Spreading the workload uniformly throughout a data centre or cluster of data centres is the top priority. Is there a way to dynamically spread the load in cloud computing systems to prevent bottlenecks from occurring? This inquiry is pertinent to the process of moving virtual machines.

  • Service provisioning automation: The elasticity of cloud computing, which enables resources to be instantly assigned and released, is one of its most enticing features. What are the best ways to use or release cloud resources while maintaining conventional system performance and utilizing optimal resources?

  • Data storage management: Data stored over the network has expanded at an exponential pace over the last decade, and data storage management has become a critical problem for cloud computing, even for organizations that subcontract their data storage or for individuals. How can we migrate data to the cloud in such a way that it can be effectively stored while remaining easily accessible?

  • The development of micro data centres for cloud computing: Micro data centres may be less expensive, more energy efficient, and more useful than large data centres. Small businesses can offer cloud computing services, making geo-diverse computing possible. To provide adequate reaction time with an efficient allocation of resources, load balancing will become an issue on a global scale.

  • Energy Management: Economies of scale are one of the benefits of cloud usage. Energy conservation is essential in a global economy because a limited number of global resources are supported by a limited number of companies rather than everyone having their own.

Through effective work scheduling and resource allocation approaches, several contemporary scheduling methods can keep load balance and provide improved results. It is vital to use resources efficiently to maximize revenues with optimum load balancing algorithms. An investigation into a few load balancing strategies or approaches used in cloud computing was offered by Ray and De Sarkar [35]. The purpose of the study was to first provide an examination of the execution of load-balancing algorithms that were based on qualitative components that had been defined for cloud simulation and then to make conclusions regarding these components. Aslam and Shah [4] gave an organized and complete survey of the research on cloud computing load balancing techniques. The research examined the most recent load balancing tools and strategies from 2004 to 2015. It aggregated current techniques aiming at delivering equitable load balancing. The authors’ classification gave a clear and succinct understanding of the underlying model used by each technique.

To prevent being locked at a local optimum, Mousavi et al. [26] presented a novel load balancing method that incorporates a teaching-learning based optimization algorithm (TLBO) and genetically weighted optimization (GWO) to balance the workload across all virtual machines (VMs) while maximizing throughput. On 11 test functions, the hybrid results were evaluated against particle swarm optimization (PSO), biogeography-based optimization (BBO), and genetically weighted optimization (GWO). A simulation of the hybrid algorithm was run to test the suggested load-balancing approach. The poor fiscal benefit of service providers is attributed to inefficient resource and power use; as a result, data centres need an efficient resource management strategy. Because of this, Kumar, Singh, and Mohan [15] designed a novel load-balancing architecture to maximize the use of data centre resources while decreasing operating expenditures. To implement the best allocation of VMs over onsite computers, the framework used a modified genetic algorithm. The test findings showed that the suggested framework outperformed the current technique and three other common heuristics-based VM placement techniques by up to 45.21%, 84.49%, 119.93%, and 113.96% in terms of resource consumption. Self-directed workload forecasting (SDWF) is a technique suggested by Kumar, Singh, and Buyya [14] that uses the difference between actual and predicted workloads to better anticipate future workloads. The neural networks in the model are trained using an improved heuristic based on black hole phenomena. The proposed method was put through its paces with the use of six different real-world data streams. Accuracy was measured against state-of-the-art models built with tools such as deep learning, evolutionary algorithms, and backpropagation. This approach decreased the mean-square forecast error by 99.9% compared to the usual method. To evaluate the forecasting framework, Friedman and Wilcoxon signed-rank tests were used.

Task scheduling helps load balancing significantly, and task scheduling closely follows the standards of the Service Level Agreement (SLA), a contract provided to consumers by cloud developers. The LB algorithm considers significant SLA factors such as the deadline. Considering the features of Quality of Service (QoS) tasks, VM priority, and resource allocation, Shafiq et al. [36] suggested a method targeted at optimizing resources and improving load balancing. Based on a literature review, the suggested LB solution addressed the identified difficulties and research gap. When compared to the existing dynamic LB algorithm, the proposed LB algorithm utilizes 78% of the permitted resources. It also performed admirably in terms of execution time. Khan et al. [11] offered a complete analysis of current research issues in machine learning-based resource management and existing ways to address these challenges, as well as their benefits and drawbacks. The report went on to suggest potential future research topics based on present research obstacles and limits.

Swarna et al. [37] recently conducted a study on load balancing of the energy cloud using wind-driven and firefly algorithms in the Internet of Everything. Their research used an energy-efficient cloud based on the Internet of Everything comprising three components: Internet of Everything (IoE) devices, cloud storage and data processing, and end-user services. Their research focused on integrating two diverse paradigms to develop an intelligent information processing technology that provides valuable services to end users. This study optimized energy utilization by clustering the various Internet of Things networks using the Wind Driven Optimization Algorithm. In their approach, an optimized cluster head (CH) was chosen for each cluster using the Firefly Algorithm.

Li et al. [19] conducted a study on computation offloading in edge computing based on deep reinforcement learning to solve the edge computing problem of multiple subtasks. Their study proposed a Task Mapping Algorithm (TMA) based on deep reinforcement learning. Using a directed acyclic graph, the DAG task was transformed with the Graph Sequence Algorithm to determine the offloading decision of all subtasks based on the sequence order. The Graph Sequence Algorithm chooses the higher-priority task to execute earlier without violating the computing dependency. The results show that the TMA based on deep reinforcement learning proposed in their study can achieve higher comprehensive user profit.

Resource allocation

The research by Naik and Kavitha Sooda [28] explored the purpose of the criteria that are considered while allocating resources; these are referred to as resource allocators and resource allocation algorithms. The cost of allocation, resource consumption, processing time, and reliability were all used to classify the criteria for the resource allocator in the study. A resource allocator structure was also provided, which considered the user's request, the service level agreement, and the status of the resource. An approach for constructing the resource allocator model was also provided.

Gomathi and Karthikeyan [7] proposed a hybrid swarm optimization approach for work assignments in the allocated context. The goal is to provide load balancing by minimizing the longest job completion time across processors. The two main components of this optimization strategy are task scheduling operations and using the particle swarm algorithm (PSA) to determine the most efficient allocation of resources across all tasks. Each aspect of this approach reflects the matching of tasks to requirements and criteria. In allocating and managing resources in the cloud, some drawbacks identified in this research include:

  • Performance and online profiling of workload: In cloud resource management research, the major elements of the workloads of major corporate providers are not satisfactorily resolved. They do not even consider the lifetime virtual resource use of VMs, for example. The vast majority of research has focused on online task profiling, which is impractical given that the performance evaluation may not be accessible until VMs are turned off.

  • Multiple Resource Usage in VM Consolidation: By consolidating virtual machines (VMs) onto fewer hosts, we may increase the number of VMs while reducing the number of hosts and energy needed to run them. Most of the research considered focuses on the amount of current CPU time being consumed by the host to evaluate whether it was overloaded. The consolidation process may become less effective because of unnecessary VM movement and host energy mode adjustments.

  • Cloud Network Traffic and temperature: The present VM allocation research includes a variety of strategies for verifying that each host is equipped to do the work before designating a single VM and various VM resources to it. Because application demand fluctuates, with resource use varying between high and low from time to time, this method results in inefficient resource utilization. In today's cloud data centres, lowering the temperature of the host is a challenging operation. The heat is emitted during the host's energy consumption. To maintain the temperature of the host below the threshold, cooling systems are used to remove this dissipated heat. This higher temperature has a direct influence on cooling system costs and has been considered a tough challenge for resource management systems to address.

  • Software-based energy metering: Current servers come equipped with several energy meters to keep track of how much power is being consumed, but these meters are unable to record the amount of power used by a virtual machine (VM). This is because measuring software energy usage effectively is difficult and expensive. Data centre energy budgets indicate that the rising cost of running servers has made progress in the virtual machine (VM) compression phase more challenging.

Materials and methods

Utilization of the Long Short-Term Memory (LSTM) machine learning algorithm for improving cloud efficiency through optimized resource allocation techniques for load balancing is essential in monitoring network traffic load. This section focuses on using the LSTM algorithm and, in particular, on modelling the LSTMP unit, whose input gate controls the signal flowing into the memory cell.

Fundamentals of the approach to long short-term memory (LSTM)

Hochreiter and Schmidhuber proposed the LSTM-equipped recurrent neural network [16]. It may be difficult to train long temporal relationships in a regular recurrent neural network because the gradient tends to vanish or inflate with time. LSTM, on the other hand, may develop long-term dependence by sustaining continuous error flow via ‘constant error carousels’ (CEC). Several changes have been made to the initial LSTM since then. An investigation was carried out into the way LSTM is utilized in Sak's LSTMP (‘projected’) form. LSTMP units have input and output gates. The LSTMP unit's input gate controls the signal into the memory cell, while the output gate controls the data flowing out. LSTMP's forget gates allow adaptive forgetting and resetting of memory cells.

Each LSTMP unit has a recurrent and a non-recurrent projection layer; the two projection layers can be replaced with one equivalent layer. The LSTM neural network is a version of the Recurrent Neural Network (RNN) that avoids the exploding gradient problem. The neural network's efficient backpropagation (learning) of the error correction for a new fact is hampered by this gradient problem. As a result, it is unable to learn facts from large datasets, implying that the RNN has a short memory, which led to the development of the Long Short-Term Memory variant. The construction of the LSTM resembles a chain (Fig. 2) built from individual memory cells; each large square block in the figure represents a memory cell.

Fig. 2 LSTM cell with four interacting layers [30]

The horizontal line that cuts across the top of the cell symbolizes the cell state, which is a crucial part of LSTM. Each cell that makes up the LSTM network's chain contributes to its output. The LSTM algorithm has the flexibility to either add to or remove from this cell state as needed. Other LSTM structures called gates perform this operation. Gates (as shown in Fig. 2) are composed of the sigmoid activation function and pointwise multiplication operations. Three gates regulate how information about the state of the cell is passed, as indicated in the diagram above: the forget, input, and output gates. Hochreiter and Schmidhuber introduced LSTM networks in 1997 [8]. Since then, modifications have been made to the memory cell layout to conduct experiments in a variety of application fields. The following equations describe the computations in a normal single LSTM cell:

$$f_t=\sigma \left(W_f\cdot \left[h_{t-1},x_t\right]+b_f\right)$$
(1)
$$i_t=\sigma \left(W_i\cdot \left[h_{t-1},x_t\right]+b_i\right)$$
(2)
$$\tilde{C}_t=\tanh \left(W_C\cdot \left[h_{t-1},x_t\right]+b_C\right)$$
(3)
$$C_t=f_t\ast C_{t-1}+i_t\ast \tilde{C}_t$$
(4)
$$o_t=\sigma \left(W_o\cdot \left[h_{t-1},x_t\right]+b_o\right)$$
(5)
$$h_t=o_t\ast \tanh \left(C_t\right)$$
(6)

where σ denotes the sigmoid activation function and tanh the hyperbolic tangent function; it, ft, ot, Ct, and C̃t denote the input gate, forget gate, output gate, memory cell content, and new (candidate) memory cell content, respectively; ht−1 is the previous hidden state and xt the input at time t. The sigmoid function forms the three gates, as previously stated, and the hyperbolic tangent function is applied to produce the output of a cell.
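For illustration, the following sketch evaluates Eqs. (1)–(6) for a single scalar LSTM unit; the toy weights, biases, and the single-unit setting are assumptions made only to keep the example short.

```java
// Scalar sketch of Eqs. (1)-(6); weights and the single-unit setting are
// illustrative assumptions chosen to keep the example compact.
public class LstmCellStep {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // One time step of a single-unit LSTM cell with concatenated input [h_{t-1}, x_t].
    static double[] step(double hPrev, double cPrev, double x,
                         double[] wf, double[] wi, double[] wc, double[] wo,
                         double bf, double bi, double bc, double bo) {
        double ft = sigmoid(wf[0] * hPrev + wf[1] * x + bf);        // Eq. (1) forget gate
        double it = sigmoid(wi[0] * hPrev + wi[1] * x + bi);        // Eq. (2) input gate
        double cTilde = Math.tanh(wc[0] * hPrev + wc[1] * x + bc);  // Eq. (3) candidate content
        double ct = ft * cPrev + it * cTilde;                       // Eq. (4) new cell state
        double ot = sigmoid(wo[0] * hPrev + wo[1] * x + bo);        // Eq. (5) output gate
        double ht = ot * Math.tanh(ct);                             // Eq. (6) new hidden state
        return new double[] { ht, ct };
    }

    public static void main(String[] args) {
        double[] w = { 0.5, 0.5 }; // shared toy weights for brevity
        double[] out = step(0.0, 0.0, 1.0, w, w, w, w, 0.0, 0.0, 0.0, 0.0);
        System.out.printf("h_t = %.4f, C_t = %.4f%n", out[0], out[1]);
    }
}
```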

Algorithms

Closest Data Centre

The simplest strategy was used first: distributing traffic to the nearest data center using the Closest Data Center (CDC) method. Between the nearest DCs and the request source, k shortest candidate paths were evaluated. A request is then assessed to determine whether it can be assigned to a specific DC using the collection of candidate paths. The RMSA technique was used to allocate requests in the optical layer, utilizing the returned path to the DC as the starting point. If this was not possible, the request was refused. Depending on the number of candidate paths, the time complexity of this approach was linear:

$$O\left(\left|P\right|\cdot \left|E\right|\ \log \left|V\right|\right)$$
(7)

where P denotes the set of candidate paths, V the set of vertices (nodes), and E the set of directed edges (fibre links); expression (7) gives the time complexity of this algorithm.
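The sketch below illustrates the CDC idea of trying the k shortest candidate paths in order and accepting the first feasible one; the CandidatePath structure and the free-slice capacity check are illustrative assumptions (and assume a modern JDK), not the exact RMSA procedure used in the study.

```java
// Sketch of the Closest Data Centre (CDC) idea: try the k shortest candidate
// paths in order and accept the first one with enough free spectrum slices.
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class ClosestDataCentre {
    record CandidatePath(String dc, double lengthKm, int freeSlices) {}

    /** Returns the chosen (DC, path) pair, or empty if the request is blocked. */
    static Optional<CandidatePath> allocate(List<CandidatePath> candidates, int requiredSlices) {
        return candidates.stream()
                .sorted(Comparator.comparingDouble(CandidatePath::lengthKm)) // closest first
                .filter(p -> p.freeSlices() >= requiredSlices)               // feasibility check
                .findFirst();
    }

    public static void main(String[] args) {
        List<CandidatePath> k = List.of(
                new CandidatePath("DC-1", 120.0, 2),
                new CandidatePath("DC-2", 310.0, 8));
        System.out.println(allocate(k, 4).map(CandidatePath::dc).orElse("request blocked"));
    }
}
```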

Monte Carlo Tree Search

Algorithm 1 describes the steps needed to implement DC request processing using Monte Carlo Tree Search. The single node at the very beginning of the MCTS tree is known as the root node. The subsequent steps are carried out until a certain computational budget β is consumed. Simply put, β denotes the number of search tree layers that will be built.

First, a search tree is built, with the values for the current DC and network resource usage at the root. For each (DC, candidate path) combination, the root has |R| × k children that can be used to fulfil the current DC request. The existing DC request distribution is used in Monte Carlo simulation runs to extend the depth of the search tree up to β levels. Tuning simulations showed that the ideal budget value β is equal to five. To determine the value of a leaf node at a certain depth, the efficiency ratings of all the DCs and optical connections in the network are combined. After that, the (DC, prospective path) pair corresponding to the child of the root with the lowest consumption measure is chosen to fulfil the request (it is regarded as the most favoured child). |Aς| represents the number of randomly selected children that should be considered for each search, and β represents the computational budget. This yields the algorithm's runtime as O(|Aς| × β). See Aibin [1] for further information on MCTS and how cloud data centers may use it.
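A simplified skeleton of this budget-limited search is sketched below; the roll-out scoring and random simulation are illustrative assumptions and do not reproduce the paper's exact Algorithm 1.

```java
// Skeleton of the budget-limited search described above: expand (DC, path)
// children of the root, run random roll-outs down to depth beta, and pick the
// child with the lowest simulated resource-consumption score.
import java.util.List;
import java.util.Random;

public class MctsSketch {
    record Child(String dc, int pathId) {}

    static final int BETA = 5;                 // computational budget (tree depth)
    static final Random RNG = new Random(42);

    /** Simulate distributing future requests for `depth` levels and return a consumption score. */
    static double rollout(Child child, int depth) {
        double score = 0.0;
        for (int level = 0; level < depth; level++) {
            score += RNG.nextDouble();         // stand-in for simulated DC + link utilisation
        }
        return score;
    }

    /** Pick the most favoured child: lowest accumulated consumption measure. */
    static Child search(List<Child> children) {
        Child best = null;
        double bestScore = Double.MAX_VALUE;
        for (Child c : children) {
            double score = rollout(c, BETA);
            if (score < bestScore) { bestScore = score; best = c; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Child> children = List.of(new Child("DC-1", 0), new Child("DC-2", 1));
        System.out.println("Chosen: " + search(children));
    }
}
```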


Algorithm 1: Monte Carlo Tree Search (MCTS)

Long-short term memory with forget gates

There is a common set of building blocks at the heart of all recurrent neural networks. Figure 3 represents the general structure of these modules and is rather straightforward, consisting of just a single hyperbolic function denoted by the symbol tanh. The structure of LSTM networks resembles a chain, but each module has four neural levels that communicate with one another (see Fig. 4).


Algorithm 2: Long-Short-Term Memory (LSTM)

Fig. 3 Single recurrent neural network with a single repeating module. Source: Aibin, Michal [1]

Fig. 4 Long short-term memory network. Source: Aibin, Michal [1]

Most importantly, LSTMs are characterized by a single storage cell, represented by a horizontal line with × and + operations, that travels over time t. This speeds up the learning process. The contents of this memory cell can be altered by utilizing gate architectures in various ways. The first σ is known as the forget gate: a 0 or 1 from the activation unit determines whether the LSTM should entirely forget its prior state (Xt-1) or maintain it for further usage. The input gate, consisting of σ and tanh and connected to the + operation, allows the process to incorporate new information into the current state while preserving the existing activation structure. The filtered data from the cell is then produced using an activation unit. O(log d) is the time complexity of this method. Algorithm 2 displays the pseudo-code of the LSTM with forget gates, customized for the optimization issue. The main function of the LSTM is to compute new information by either remembering or forgetting the prior states; in this instance, it considers whether the traffic flow has altered (lines 2–8). The algorithm was initially trained on data sets produced by several traffic sources to enable the LSTM to categorize traffic patterns. The procedure's subparts are outlined in the following paragraphs. If the LSTM notices a shift in traffic patterns, it uses the present state of the network to determine the best DC and the most efficient route to it; all prior measures of usage are discarded during this process (lines 2–5). Throughout the simulation, the LSTM's neural network continually studies the traffic patterns. The information on how many regenerators are available in the network is sent as output data to the next LSTM cell (lines 9–13). Spectrally efficient modulations (8-, 16-, 32-, 64-QAM) were favoured if more than 50% of regenerators were available; otherwise, QPSK or BPSK was chosen. The DC and route were returned to determine whether it is possible to allocate the request; otherwise, null was returned.
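The sketch below captures the decision flow just described (reset remembered usage when the traffic trend shifts, then choose a modulation family from regenerator availability); class names and the change-detection input are illustrative assumptions rather than the paper's pseudo-code.

```java
// Sketch of the decision flow described for Algorithm 2: forget prior usage
// statistics when the traffic trend changes, then pick a modulation family
// based on regenerator availability (>50% free favours x-QAM).
import java.util.ArrayList;
import java.util.List;

public class LstmRoutingSketch {
    private final List<Double> usageHistory = new ArrayList<>();

    String chooseModulation(double availableRegenRatio) {
        // Spectrally efficient modulations are preferred when >50% of regenerators are free.
        return availableRegenRatio > 0.5 ? "x-QAM (8/16/32/64)" : "QPSK/BPSK";
    }

    String handleRequest(boolean trafficTrendChanged, double availableRegenRatio, double currentUsage) {
        if (trafficTrendChanged) {
            usageHistory.clear();              // "forget" prior utilisation measures
        }
        usageHistory.add(currentUsage);        // keep studying the traffic pattern
        String modulation = chooseModulation(availableRegenRatio);
        // A real implementation would now run RMSA on the chosen path; here we
        // just report the decision, or null if no resources were found.
        return availableRegenRatio > 0.0 ? "route via best DC using " + modulation : null;
    }

    public static void main(String[] args) {
        LstmRoutingSketch s = new LstmRoutingSketch();
        System.out.println(s.handleRequest(true, 0.62, 0.41));
    }
}
```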

Simulation setup

Both the Euro28 network (consisting of 28 nodes, 82 unidirectional links, 610 km of total link length, and 7 DCs) and the US26 network (consisting of 26 nodes, 84 unidirectional links, 754 km of total link length, and 10 DCs) were subjected to investigation, and 100 regenerators were placed in each node of both networks. The AWS website was used to identify the locations of both data centers and interconnection links [2]. There are ten m3.2xlarge Amazon EC2 machines available in each data center location. AWS fees in the first three months of 2019 were the primary factor in the cost of DC infrastructure; the optical layer was realized with EON technology. Based on hypothetical requirements, the full 4 THz spectrum was sliced into 320 slices of 12.5 GHz. Because this setup combined EON and BV-Ts, PDM-OFDM technology employing a wide variety of modulation schemes, including QPSK, BPSK, and x-QAM (where x is 8, 16, 32, or 64), was also used. Bit-rate constraints of 40 Gbps, 100 Gbps, and 400 Gbps were met by employing the three different BV-Ts. Three more networks that process data from other nations are connected to each of the networks. Physical connection degradation (fibre attenuation, component insertion loss) and regeneration were explored. The traffic model, developed using a Cisco Visual Networking Index forecast for 2020, accounted for PaaC, SaaS, and SaaC requests [6]. In this paper, simulations of three scenarios were considered:

  • one source of traffic (the Poisson distribution, because it is the one that is utilized most of the time [40]);

  • a traffic trend that changes randomly and quickly, drawn from the Poisson [25] and Constant Uniform [18] distributions;

  • a rapid change in the traffic trend together with connection failures (same distributions as above).

The average arrival rate λ was found to be between 3 and 7 requests per unit of time, with a confidence level of 95%. The requests' lifetimes were exponentially distributed, with mean value 1/γ, where γ = 0.01. Traffic volume is measured in Erlangs (ER), defined as λ/γ, and ranges from 300 to 700. In the scenarios involving the Euro28 and the US26, there are a total of 500,000 requests. It should be noted that in the third scenario, the examination was only carried out on service restoration, not normal path protection or any other survivability mechanism. This option was taken to test the algorithms' capacity to recover and reconfigure the network quickly. To continue handling the requests that were missed due to the connection loss, the queue is refilled. Due to the uncommon nature of optical node failure, the simulation only considered a single instance of a failed multi-link (up to three links failing at the same time) [34]. To replicate real-world situations, the recovery time is set to 50/γ.
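The traffic model parameters above can be illustrated with a short sketch that draws Poisson-process inter-arrival times and exponential holding times and reports the offered load in Erlangs; the particular λ and γ values are examples chosen to fall inside the ranges stated in this section.

```java
// Sketch of the traffic model: Poisson arrivals with rate lambda, exponential
// holding times with mean 1/gamma, and offered load lambda/gamma in Erlangs.
import java.util.Random;

public class TrafficModelSketch {
    public static void main(String[] args) {
        Random rng = new Random(7);
        double lambda = 5.0;                   // requests per unit time (3..7 in this paper)
        double gamma = 0.01;                   // service rate; mean holding time = 1/gamma
        double offeredLoadEr = lambda / gamma; // = 500 Erlangs, within the 300-700 range

        // Exponential inter-arrival and holding times via inverse-transform sampling.
        double interArrival = -Math.log(1.0 - rng.nextDouble()) / lambda;
        double holdingTime = -Math.log(1.0 - rng.nextDouble()) / gamma;

        System.out.printf("offered load = %.0f ER, next arrival in %.3f, holds for %.1f%n",
                offeredLoadEr, interArrival, holdingTime);
    }
}
```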

Toolkits platforms and risk management

The tool employed for the technical development of this study was the Deeplearning4j class library, which contains the LSTM machine learning algorithm. This library only works with a 64-bit Java Virtual Machine (JVM), i.e. a system with a 64-bit Java Development Kit (JDK) installed. Its minimum requirement is JDK 7, which means systems with JDK versions lower than JDK 7 cannot run the Deeplearning4j library [5]. Deeplearning4j contains machine learning dataset pre-processors and feature extractors. It facilitated the training and parameter configuration of the training phase, in which the trained system was retrained until an efficient system was achieved that was able to accurately allocate resources intuitively.
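A minimal Deeplearning4j configuration sketch is shown below, assuming a recent DL4J release on a 64-bit JVM; the layer sizes, updater, loss function, and feature counts are illustrative assumptions and not the configuration used in the study.

```java
// Minimal Deeplearning4j sketch (assumes a recent DL4J release on a 64-bit JVM).
// Layer sizes, updater and loss function are illustrative assumptions.
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class Dl4jLstmSketch {
    public static void main(String[] args) {
        int numFeatures = 8;   // e.g. per-application utilisation metrics (assumed)
        int numOutputs = 1;    // e.g. predicted resource demand (assumed)

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .updater(new Adam(1e-3))
                .list()
                .layer(new LSTM.Builder().nIn(numFeatures).nOut(64)
                        .activation(Activation.TANH).build())
                .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MSE)
                        .activation(Activation.IDENTITY).nIn(64).nOut(numOutputs).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        System.out.println("Parameters: " + net.numParams());
        // Training would then call net.fit(...) on sequence data prepared with
        // the library's dataset pre-processors.
    }
}
```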

The risk strategy adopted for this study is risk avoidance, which requires the risk to be eliminated by taking actions that ensure the risk does not occur. All resource items were acquired in the early stages of this research and were tested and functional; these include the PC used for development, the articles for the literature review, and the Deeplearning4j library. The datasets were acquired and reviewed to provide the insight necessary for the trained LSTM machine learning algorithm to intuitively allocate resources based on application usage. To avoid the risk of technical difficulty in developing the application for this study, relevant resources were acquired and reviewed to obtain all the information required to develop an efficient application, while avoiding common bottlenecks in similar endeavours.

Experimental results

Scenario 1

Initial experiments focused on scenario 1. The CDC algorithm fails to meet expectations, yielding over 10% BP for both the Euro28 and US26 networks (see Fig. 5). The acceptable Service Level Agreement (SLA) is typically specified by the industry at a maximum of 1%. We concentrate next on the two top algorithms, MCTS and LSTM. Both algorithms produced the best outcomes (0% BP) for light traffic loads (less than 400 ER). BP first manifests itself at traffic loads between 400 and 450 ER. Despite this, the BP for LSTM and MCTS was considerably lower than the maximum SLA. Around 600 ER, the first marked BP rise becomes apparent. This was the point at which the network's resources begin to run out: the accessible spectrum was constrained and the number of regenerators was dwindling. Investigating the potential for more resources can help us find a solution. Finally, MCTS performs marginally more efficiently than LSTM when network traffic trends are not changing quickly. This is because, when traffic patterns stay constant, MCTS can construct intricate search trees to forecast the optimum routing choices. Table 2 shows the service cost per hour in dollars for scenario #1.

Table 1 Summary of related work
Fig. 5 Scenario #1: The Euro28 (top) and the US26 (bottom)

Scenario 2

Simulations for scenario #2 were then run, in which the traffic pattern changed often (see Fig. 6). To make the graphics easier to read, we did not include the data produced by the CDC algorithm because they were subpar. The LSTM produces far better outcomes than MCTS, which is the primary distinction between the first two scenarios in terms of the performance of the algorithms. It was observed that MCTS experienced performance problems when the traffic trend changed. The effectiveness of the algorithm is decreased because MCTS fails to immediately recognize the new pattern and instead generates the same predictions as before; reaching the maximum degree of accuracy takes time. For light to moderate traffic volumes, the LSTM provides an adequate SLA. In conclusion, a comprehensive look at the pattern reveals that the US26 network produces somewhat worse outcomes than the Euro28 network. A major difference between the network architectures of the US26 and the Euro28 is the reason for this: the nodes of US26 are spread out across both sides of the continent, whereas Euro28's nodes are concentrated in a single area. Table 3 shows the service cost per hour in dollars for scenario #2.

Table 2 Service cost per hour (in USD), scenario #1
Fig. 6 Scenario #2: The US26 (top) and the Euro28 (bottom)

Scenario 3

Simulating scenario #3 was the last part (see Fig. 7). LSTM was the best algorithm. As it reacts to new modifications more efficiently than MCTS or the straightforward CDC, it enabled the speedy restoration of services. For light and moderate traffic volumes, the LSTM obtained a respectable SLA. The difference between MCTS and LSTM in error reduction is about 10–15%. A sequence of data of indeterminate length may be processed and predicted with the LSTM algorithm. A key reason is that LSTM outperforms the Markov models underlying MCTS because of its relative insensitivity to gap length. Table 4 shows the service cost per hour in dollars for scenario #3.

Table 3 Service cost per hour (in USD), scenario #2
Fig. 7 Scenario #3: The Euro28 (top) and the US26 (bottom)

Table 4 Service cost per hour (in USD), scenario #3

Comparison with recent state of the art

Focusing on the cost of service (CoS) (as shown in Tables 2, 3 and 4), MCTS and LSTM not only had superior BP performance but also had a reduced OPEX. The US26 network's CoS is somewhat greater than the Euro28 network's. This supports the finding that CoS and BP are affected by the different network designs. Since MCTS does not need to compute network outputs or run backpropagation, it gives marginally lower fees than LSTM for heavy traffic when trends do not change. The trends diverge, however, in cases where the request pattern changes quickly; the LSTM then emerges as the most affordable option. Early detection of changes in traffic patterns enables LSTM to "forget" prior information and begin utilizing new patterns to apply new rules. Because MCTS is continually creating search trees without considering the quick changes, it takes longer for it to get used to new traffic circumstances. Only under light traffic volumes did both algorithms provide comparable prices. Because of the low traffic loads, poor routing choices have little effect on the CoS, as they do not use many of the network's resources. The LSTM outperforms competing algorithms considerably under growing traffic loads and increasingly unpredictable traffic trends. The final example illustrates the point quite well: each poor choice is substantially more expensive, since it necessitates rerouting the requests that were turned down because of unavailable resources and failed network connections.

Making practical judgments on resource reallocation and leasing requires an understanding of the basic performance indicators of load, allocated resources, and application evolution over time; we therefore compared our results with the state of the art, in particular the research of [23]. Addressing these concerns makes it clear that it is difficult to understand the operation of any large computer system, including the cloud. The first reason is that cloud technologies based on conventional computer operating systems do not provide real-time assurances. Second, and perhaps more crucially, a fundamental theory is required to guide the construction of useful tools that forecast and regulate the performance of programs. This is a basic scenario for computer systems, but it stands out even more because cloud environments use an extra virtualization layer on top of which cloud applications run.

Furthermore, it has been demonstrated via discussion and analysis of state-of-the-art methodologies that no single strategy can entirely address all challenges related to load balancing. The researchers uncovered this fact. For instance, whereas some solutions completely disregard QoS, dependability, and scalability, others do not. Additionally, while most of the studied mechanisms used simulation to assess the suggested processes, several others did not. To evaluate the implications that scale might have on system performance in a large-scale setting, future research should either use real cloud systems or a simulator like CloudSim. According to the reviews of various studies, efforts to decentralize load balancing are now being made [12, 33]. In theory, it makes sense to see the resources available in a data centre as a unified whole. On the other hand, it might not be the best option in any kind of failure scenario that could influence the way the system works. Because of this, an adaptive load balancing technique would be the preferable choice. This method would allow resources to be managed independently inside clusters, and clusters would be generated dynamically based on the status of the application and the request currently being processed. It is anticipated that adaptive load balancing would utilize a combination of centralized and distributed control techniques. This would enable the adjustment of the trade-off between dependable workflow and efficient use of resources. Based on the results, the proposed model's accuracy rate is enhanced by approximately 10–15% compared with other models [31]. This means that the proposed technique improves network usage, taking less time thanks to its good predictive approach compared with other models.

Conclusion

This research focused on the implementation of an application of the LSTM algorithm, which provided an intuitive dynamic resource allocation system that analysed heuristic application resource utilization to ascertain the best extra resource to provide for each application. The software solution simulated, in near real-time, the resource allocation by the trained LSTM model. Combining these with cloud data center dynamic routing approaches has benefits. Long Short-Term Memory and Monte Carlo Tree Search were compared. The data demonstrated that MCTS works efficiently when the traffic trend remains stable throughout the simulation. Due to changing traffic patterns, this is often impractical. On the other hand, it was verified that by employing LSTM this problem could be solved and an acceptable service level agreement (SLA) achieved. For future work, algorithm design and implementation in cloud data centers employing various heuristics and machine learning approaches are proposed. There is also a need for a deeper examination of the optical and data center network resource requirements, now and in the future, and thus for establishing and putting into practice algorithms for additional physical models that may be used in elastic optical networks, using traffic prediction systems based on algorithms other than LSTM and Monte Carlo Tree Search, such as the Las Vegas algorithm.

While different performance metrics that affect load balancing (such as response time, predictability, reliability, scalability, fault tolerance, associated overhead, throughput, and thrashing) were employed in our approach to improve system stability by balancing the load across the available virtualised resources, our study did not calculate the energy consumption of the individual devices connected in the system at personal terminals (including the desktop, handset, and laptop), the network nodes, or the application server used in our experiment. As a result, our approach could not determine the power minimization in wired and wireless networks. Secondly, even though the experimental results show that the LSTM can achieve load balancing and improve system performance, this cannot be generalised on the basis of only two networks (US26 and Euro28). By implication, we cannot generalise the results of our approach until it is tested on other network data.

Availability of data and materials

The supporting data can be provided on request.


References

  1. Aibin M (2020) LSTM for Cloud Data Centers Resource Allocation in Software-Defined Optical Networks. In: 2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). IEEE, New York, p 0162–0167

  2. Amazon Web Services (2016) Elastic Compute Cloud (EC2) Cloud Server & Hosting AWS. [Online] Available: https://aws.amazon.com/ec2. Accessed 20 Apr 2022

  3. Arunarani AR, Manjula D, Sugumaran V (2019) Task scheduling techniques in cloud computing: A literature survey. Future Generation Computer Systems 91:407–415


  4. Aslam S, Shah MA (2015) Load balancing algorithms in cloud computing: A survey of modern techniques. In: 2015 National software engineering conference (NSEC). IEEE, Rawalpindi, p 30–35

  5. Baeldung (2022) A Guide to DeepLearning4J. [Online] Available at: https://www.baeldung.com/deeplearning4j. Accessed 20 Apr 2022

  6. Cisco Systems (2016) Cisco Global Cloud Index: Forecast and Methodology. pp 1–41


  7. Gomathi B, Karthikeyan K (2013) Task scheduling algorithm based on hybrid particle swarm optimization in cloud computing. Appl Inf Techno 55:33–38


  8. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735–1780


  9. Jawhar I, Mohamed N, Al-Jaroodi J, Agrawal DP, Zhang S (2017) Communication and networking of UAV-based systems: Classification and associated architectures. J Netw Comput Appl 84:93–108


  10. Katyal M, Mishra A (2014) A comparative study of load balancing algorithms in cloud computing environment. arXiv preprint arXiv:1403.6918

  11. Khan T, Tian W, Zhou G, Ilager S, Gong M, Buyya R (2022) Machine Learning (ML)-Centric Resource Management in Cloud Computing: A Review and Future Directions. arXiv preprint arXiv:2105.05079.

  12. Khan T, Tian W, Zhou G, Ilager S, Gong M, Buyya R (2022) Machine learning (ML)–Centric resource management in cloud computing: A review and future directions. J Netw Comp Appl 204. https://doi.org/10.1016/j.jnca.2022.103405

  13. Kumar P, Kumar R (2019) Issues and challenges of load balancing techniques in cloud computing: A survey. ACM Comput Surv (CSUR) 51(6):1–35

  14. Kumar J, Singh AK, Buyya R (2021) Self-directed learning-based workload forecasting model for cloud resource management. Inf Sci 543:345–366

  15. Kumar J, Singh AK, Mohan A (2021) Resource-efficient load‐balancing framework for cloud data center networks. ETRI J 43(1):53–63

  16. Kvjoshi P (2017) Deep Learning for Sequential Data - Part V: Handling Long Term Temporal Dependencies. [Online] Available at: https://prateekvjoshi.com/2016/05/31/deeplearning-for-sequential-data-part-v-handling-long-term-temporaldependencies/. Accessed 21 Apr 2022

  17. Lee G, Tolia N, Ranganatha P, Katz RH (2010) Topology-aware resource allocation for data-intensive workloads. In: Proceedings of the First ACM Asia-Pacific Workshop on Systems. pp 1–6

  18. Leitmann D (1976) On the uniform distribution of some sequences. J Lond Math Soc 2(3):430–432

  19. Li MC, Mao N, Zheng X, Gadekallu TR (2022) Computation Offloading in Edge Computing Based on Deep Reinforcement Learning. Lect Notes Netw Syst 394:339–353. https://doi.org/10.1007/978-981-19-0604-6_28

  20. Liu Y, Njilla LL, Wang J, Song H (2019) An lstm enabled dynamic stackelberg game theoretic method for resource allocation in the cloud. In: 2019 International Conference on Computing, Networking and Communications (ICNC). IEEE, Honolulu, p 797–801

  21. Mata J, de Miguel I, Duran RJ, Merayo N, Singh SK, Jukan A, Chamania M (2018) Artificial intelligence (AI) methods in optical networks: A comprehensive survey. Opt Switch Netw 28:43–57

  22. Marinescu DC, Paya A, Morrison JP, Olariu S (2017) An approach for scaling cloud resource management. Cluster Comput 20(1):909–924

  23. Milani AS, Navimipour NJ (2016) Load balancing mechanisms and techniques in the cloud environments: Systematic literature review and future trends. J Netw Comput Appl 71:86–98

  24. Mell P, Grance T (2011) The NIST Definition of Cloud Computing, Special Publication (NIST SP), National Institute of Standards and Technology, Gaithersburg. [online] https://doi.org/10.6028/NIST.SP.800-145. Accessed 22 Nov 2022

  25. Mneimneh S (2003) Computer Networks: Modeling arrivals and service with Poisson process. Tech. Rep

  26. Mousavi S, Mosavi A, Varkonyi-Koczy AR (2018) A load balancing algorithm for resource allocation in cloud computing. In: Luca D, Sirghi L, Costin C (eds) Recent Advances in Technology Research and Education. INTER-ACADEMIA 2017. Advances in Intelligent Systems and Computing, vol 660. Springer, Cham, p 289–296. https://doi.org/10.1007/978-3-319-67459-9_36

  27. Musumeci F, Rottondi C, Nag A, Macaluso I, Zibar D, Ruffini M, Tornatore M (2018) An overview on application of machine learning techniques in optical networks. IEEE Commun Surv Tutorials 21(2):1383–1408

  28. Naik A, Kavitha Sooda K (2021) A study on Optimal Resource Allocation Policy in Cloud Environment. Turkish J Comput Math Educ (TURCOMAT) 12(14):5438–5446

  29. Okonor O, Adda M, Gegov A, Sanders D, Haddad MJM, Tewkesbury G (2019) Intelligent approach to minimizing power consumption in a cloud-based system collecting sensor data and monitoring the status of powered wheelchairs. In: Bi Y, Bhatia R, Kapoor S (eds) Intelligent Systems and Applications. IntelliSys 2019. Advances in Intelligent Systems and Computing, vol 1037. Springer, Cham, p 694–710. https://doi.org/10.1007/978-3-030-29516-5_52

  30. Olah C (2017) Understanding LSTM Networks. [Online] Available at: http://colah.github.io/posts/2015-08-Understanding-LSTMs/. Accessed 21 Apr 2022

  31. Ouhame S, Hadi Y, Ullah A (2021) An efficient forecasting approach for resource utilization in cloud data center using CNN-LSTM model. Neural Comput Appl 33(16):10043–10055

  32. Qureshi MB, Dehnavi MM, Min-Allah N, Qureshi MS, Hussain H, Rentifis I, Tziritas N, Loukopoulos T, Khan SU, Xu CZ, Zomaya AY (2014) Survey on grid resource allocation mechanisms. J Grid Comput 12(2):399–441

  33. Rahimi AM, Ziaeddini A, Gonglee S (2022) A novel approach to efficient resource allocation in load-balanced cellular networks using hierarchical DRL. J Ambient Intell Humaniz Comput 13(5):2887–2901

  34. Rak J (2015) Resilient routing in communication networks, vol 118. Springer, Berlin

  35. Ray S, De Sarkar A (2012) Execution analysis of load balancing algorithms in cloud computing environment. Int J Cloud Computing: Serv Archit (IJCCSA) 2(5):1–13

  36. Shafiq DA, Jhanjhi NZ, Abdullah A, Alzain MA (2021) A Load Balancing Algorithm for the Data Centres to Optimize Cloud Computing Applications. IEEE Access 9:41731–41744

  37. Swarna SP, Bhattacharya S, Maddikunta PKR, Somayaji SRK, Lakshmanna K, Kaluri R, Hussien A, Gadekallu TR (2020) Load balancing of energy cloud using wind driven and firefly algorithms in internet of everything. J Parallel Distrib Comput 142:16–26. https://doi.org/10.1016/j.jpdc.2020.02.010

  38. Swami KS, Sai Kiran P (2018) Secure data duplication with dynamic ownership management in cloud storage. J Adv Res Dyn Control Syst 10(12):753–761

  39. Toosi AN, Calheiros RN, Thulasiram RK, Buyya R (2011) Resource provisioning policies to increase iaas provider’s profit in a federated cloud environment. In: 2011 IEEE International Conference on High Performance Computing and Communications. IEEE, Banff, p 279–287

  40. Walkowiak K (2016) Modeling and optimization of cloud-ready and content-oriented networks. Studies in Systems, Decision and Control, vol 56. Springer, Berlin. [Online] Available: http://www.springer.com/series/13304

  41. Xiao Z, Song W, Chen Q (2012) Dynamic resource allocation using virtual machines for cloud computing environment. IEEE Trans Parallel Distrib Syst 24(6):1107–1117

  42. Xin Y, Xie ZQ, Yang J (2017) A load balance oriented cost efficient scheduling method for parallel tasks. J Netw Comput Appl 81:37–46

Funding

This work was supported by the Department of Cyber Security and Networks of Glasgow Caledonian University.

Author information

Contributions

Conceptualization by Douglas Oyakhire and Moses Ashawa; Methodology by Moses Ashawa; Design by Douglas Oyakhire; Formal analysis by Moses Ashawa and Jude Osamor; Investigation by Moses Ashawa; Resources and data collection by Douglas Oyakhire; Writing and proofreading by Moses Ashawa and Riley Jackie; Validation by Jude Osamor; Funding acquisition by Riley Jackie, Moses Ashawa, and Jude Osamor. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Moses Ashawa.

Ethics declarations

Ethics approval and consent to participate

Ethical approval and consent to participate were obtained for this research.

Consent for publication

Consent for publication has been granted by all authors, and there is no conflict of interest.

Competing interests

There are no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article has been retracted. Please see the retraction notice for more detail: https://doi.org/10.1186/s13677-023-00562-z

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Ashawa, M., Douglas, O., Osamor, J. et al. RETRACTED ARTICLE: Improving cloud efficiency through optimized resource allocation technique for load balancing using LSTM machine learning algorithm. J Cloud Comp 11, 87 (2022). https://doi.org/10.1186/s13677-022-00362-x


Keywords