HeRAFC: Heuristic resource allocation and optimization in MultiFog-Cloud environment
Fog computing brings computing capacity from the remote cloud environment closer to the user. As a result, users can access services from nearby computing environments, resulting in better quality of service and lower ...
Highlights
- The resource allocation problem in a multifog-cloud environment is studied.
- The physical infrastructure of the multifog-cloud environment is modeled using graph theory.
- The resource allocation problem is formulated as a mixed-integer linear programming (MILP) model.
- The ...
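The HeRAFC highlights describe a heuristic for allocating tasks to fog and cloud resources on top of a graph/ILP formulation. As a rough illustration of the flavor of such a heuristic (not the HeRAFC algorithm itself), the toy Python sketch below greedily places tasks on the lowest-latency node with spare capacity; all node names, capacities, and latencies are hypothetical.

```python
# Illustrative toy only: a greedy latency-aware task-to-node assignment,
# not the HeRAFC algorithm itself. Node names, capacities, and latencies
# are hypothetical.
def greedy_assign(tasks, nodes):
    """tasks: {task: cpu_demand}, nodes: {node: (cpu_capacity, latency_ms)}."""
    remaining = {n: cap for n, (cap, _) in nodes.items()}
    placement = {}
    # Place the largest tasks first, each on the lowest-latency node that still fits.
    for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        feasible = [n for n in nodes if remaining[n] >= demand]
        if not feasible:
            placement[task] = None          # would be rejected or deferred
            continue
        best = min(feasible, key=lambda n: nodes[n][1])
        placement[task] = best
        remaining[best] -= demand
    return placement

if __name__ == "__main__":
    tasks = {"t1": 2, "t2": 1, "t3": 3}
    nodes = {"fog1": (4, 5.0), "fog2": (2, 8.0), "cloud": (100, 60.0)}
    print(greedy_assign(tasks, nodes))
```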
ESMA: Towards elevating system happiness in a decentralized serverless edge computing framework
Due to the rapid growth in the adoption of numerous technologies, such as smartphones and the Internet of Things (IoT), edge and serverless computing have started gaining momentum in today's computing infrastructure. This has led to the production ...
Highlights
- Decentralized destination selection algorithm using matching theory.
- Optimized sum of rankings of all partners involved in matching.
- Overall increase in the happiness of the Edge-Serverless system.
- Drop in execution time and ...
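ESMA's destination selection is built on matching theory. As a point of reference only, the sketch below shows the classic one-to-one deferred-acceptance (Gale-Shapley) matching that such schemes build on; ESMA's decentralized protocol, ranking construction, and happiness metric are more involved, and the agent names here are hypothetical.

```python
# Illustrative sketch of one-to-one deferred acceptance (Gale-Shapley) matching,
# the classic building block of matching theory; ESMA's actual decentralized
# protocol and preference construction differ. Names below are hypothetical.
def deferred_acceptance(proposer_prefs, acceptor_prefs):
    """proposer_prefs / acceptor_prefs: {agent: [partners in order of preference]}."""
    rank = {a: {p: i for i, p in enumerate(prefs)} for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)          # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                           # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if a not in match:
            match[a] = p                 # acceptor tentatively accepts
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])        # acceptor trades up; old partner becomes free
            match[a] = p
        else:
            free.append(p)               # proposal rejected, try next preference
    return match

funcs = {"f1": ["e1", "e2"], "f2": ["e1", "e2"]}
edges = {"e1": ["f2", "f1"], "e2": ["f1", "f2"]}
print(deferred_acceptance(funcs, edges))
```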
Construction algorithms of fault-tolerant paths and disjoint paths in k-ary n-cube networks
With the increasing demand for information, the number of nodes in networks is growing rapidly, which in turn brings problems of information loss and communication delay caused by insufficient communication capacity. Therefore, how to effectively transmit ...
Highlights
- We propose a fault-tolerant path algorithm for Q_n^k to obtain a fault-free path between any two distinct fault-free nodes.
- We design two algorithms, called DPQ1 and DPQ2, to construct disjoint paths between any two distinct nodes of Q ...
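For orientation, the sketch below builds a k-ary n-cube as n-digit radix-k tuples (neighbors differ by ±1 mod k in one dimension) and finds a fault-free path by plain breadth-first search. It only illustrates the problem setting; the paper's fault-tolerant path algorithm and the DPQ1/DPQ2 disjoint-path constructions are far more efficient than brute-force search.

```python
# Illustrative only: breadth-first search for a fault-free path in a small
# k-ary n-cube, where each node is an n-digit radix-k tuple and neighbors
# differ by +/-1 (mod k) in exactly one dimension. The paper's algorithms
# (fault-tolerant path construction, DPQ1, DPQ2) are far more efficient.
from collections import deque

def neighbors(node, k):
    for dim in range(len(node)):
        for delta in (1, -1):
            yield node[:dim] + ((node[dim] + delta) % k,) + node[dim + 1:]

def fault_free_path(src, dst, k, faulty):
    frontier = deque([src])
    parent = {src: None}
    while frontier:
        u = frontier.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in neighbors(u, k):
            if v not in parent and v not in faulty:
                parent[v] = u
                frontier.append(v)
    return None  # dst unreachable with the given faults

# 3-ary 2-cube (a 3x3 torus) with one faulty node.
print(fault_free_path((0, 0), (2, 2), 3, faulty={(1, 1)}))
```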
Graph based routing algorithm for torus topology and its evaluation for the Angara interconnect
Several approaches and techniques exist to resolve the load balancing problem in general and in torus topology networks in particular. Graph methods are a natural way to balance routing paths. A routing balancing algorithm must operate within the ...
Highlights
- A routing graph abstracts network routing rules.
- A deadlock-free routing algorithm is based on a fast single-source shortest path algorithm and optimized for a torus topology.
- A routing graph for the Angara network reflects the ...
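The Angara routing work builds on a fast single-source shortest-path pass over a routing graph. A minimal sketch of such a pass (textbook Dijkstra over an adjacency-list routing graph) is shown below; the deadlock-freedom constraints and torus-specific optimizations of the actual algorithm are not represented, and the 4-node ring example is hypothetical.

```python
# Illustrative sketch of a single-source shortest-path pass (Dijkstra) over a
# weighted routing graph; the Angara routing algorithm builds on such a pass
# but adds deadlock-freedom and torus-specific optimizations not shown here.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]} -> {node: distance}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical 4-node ring (1-D torus) with unit link weights.
ring = {i: [((i + 1) % 4, 1), ((i - 1) % 4, 1)] for i in range(4)}
print(dijkstra(ring, 0))   # {0: 0, 1: 1, 3: 1, 2: 2}
```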
A Seer knows best: Auto-tuned object storage shuffling for serverless analytics
Serverless platforms offer high resource elasticity and pay-as-you-go billing, making them a compelling choice for data analytics. To craft a “pure” serverless solution, the common practice is to transfer intermediate data between serverless ...
Highlights
- First-ever performance characterization of the IBM COS service.
- Performance modeling and analysis of the direct and multi-level shuffle methods.
- A serverless shuffle manager to dynamically choose the optimal method at runtime.
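Seer chooses between direct and multi-level shuffling at runtime using performance models of the object store. The toy cost model below is only meant to illustrate the trade-off it navigates (per-request overhead versus transferred volume); the constants and the specific two-level formula are made up, whereas Seer calibrates its models against measured IBM COS behavior.

```python
# Illustrative toy cost model only: choose between a direct shuffle (every mapper
# writes one object per reducer) and a two-level shuffle (mappers pre-aggregate into
# groups) based on per-request overhead vs. bandwidth. The constants are made up.
def direct_cost(m, r, bytes_per_pair, req_overhead_s, bandwidth_bps):
    requests = m * r                          # one small object per mapper-reducer pair
    volume = m * r * bytes_per_pair
    return requests * req_overhead_s + volume / bandwidth_bps

def two_level_cost(m, r, bytes_per_pair, req_overhead_s, bandwidth_bps, groups):
    # Stage 1: mappers write per-group objects; stage 2: combiners write per-reducer objects.
    requests = m * groups + groups * r
    volume = 2 * m * r * bytes_per_pair       # data traverses storage twice
    return requests * req_overhead_s + volume / bandwidth_bps

def pick_shuffle(m, r, bytes_per_pair, req_overhead_s=0.02, bandwidth_bps=100e6, groups=8):
    d = direct_cost(m, r, bytes_per_pair, req_overhead_s, bandwidth_bps)
    t = two_level_cost(m, r, bytes_per_pair, req_overhead_s, bandwidth_bps, groups)
    return ("direct", d) if d <= t else ("two-level", t)

print(pick_shuffle(m=500, r=500, bytes_per_pair=10_000))
```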
MLLess: Achieving cost efficiency in serverless machine learning training
Function-as-a-Service (FaaS) has raised growing interest in how to “tame” serverless computing to enable domain-specific use cases such as data-intensive applications and machine learning (ML). Recently, several systems have been ...
Highlights
- Implementation of a serverless system for training shallow machine learning models.
- Development of new optimizations tailored to the traits of serverless ML training.
- A thorough evaluation against PyTorch on a cluster with the same ...
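One family of optimizations tailored to serverless ML training is filtering out insignificant model updates before they are shared between short-lived workers. The sketch below shows such a significance filter in its simplest form; the threshold, the layout, and whether this matches MLLess's actual mechanism are assumptions for illustration only.

```python
# Hedged illustration of a significance filter, one way to cut communication in
# serverless ML training: a worker only publishes the coordinates of its local
# update whose relative change exceeds a threshold. The threshold and layout
# below are hypothetical, not necessarily MLLess's exact mechanism.
import numpy as np

def significant_update(weights, update, rel_threshold=0.01):
    """Return (indices, values) of update entries worth sending."""
    denom = np.maximum(np.abs(weights), 1e-8)        # avoid division by zero
    mask = np.abs(update) / denom >= rel_threshold
    idx = np.nonzero(mask)[0]
    return idx, update[idx]                          # sparse representation to publish

w = np.array([1.0, 2.0, 0.5, 4.0])
u = np.array([0.5, 0.001, 0.2, 0.0001])
print(significant_update(w, u))   # only the entries that move the model noticeably
```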
Task scheduling optimization in heterogeneous cloud computing environments: A hybrid GA-GWO approach
Cloud computing, a technology providing flexible and scalable computing resources, faces a critical challenge in task scheduling, directly impacting system performance and customer satisfaction. The task scheduling problem's NP-completeness makes ...
Highlights
- A hybrid task scheduling strategy based on GA and GWO is proposed.
- Mutation and crossover are introduced to preserve the diversity of the wolf population and boost the algorithm's local search.
- A novel fitness function is proposed considering multiple ...
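To make the hybrid concrete, the sketch below shows a standard Grey Wolf Optimizer position update with a GA-style mutation bolted on (crossover omitted for brevity). It is a generic continuous-domain illustration, not the proposed scheduler: the paper's encoding of task assignments, its crossover operator, and its multi-objective fitness function are not reproduced here.

```python
# Illustrative GWO position update with GA-style mutation, sketching the kind of
# hybrid step the highlights describe; the coefficients, encoding, and fitness
# function are hypothetical and continuous, whereas task scheduling would use a
# discrete encoding.
import random

def gwo_step(wolves, fitness, a, mutation_rate=0.1):
    alpha, beta, delta = sorted(wolves, key=fitness)[:3]
    new_wolves = []
    for w in wolves:
        new_pos = []
        for d in range(len(w)):
            guided = 0.0
            for leader in (alpha, beta, delta):
                r1, r2 = random.random(), random.random()
                A, C = 2 * a * r1 - a, 2 * r2
                guided += leader[d] - A * abs(C * leader[d] - w[d])
            x = guided / 3.0
            if random.random() < mutation_rate:      # GA-style mutation keeps diversity
                x += random.gauss(0, 0.1)
            new_pos.append(x)
        new_wolves.append(new_pos)
    return new_wolves

wolves = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
sphere = lambda w: sum(x * x for x in w)
for t in range(50):
    wolves = gwo_step(wolves, sphere, a=2 * (1 - t / 50))
print(min(map(sphere, wolves)))
```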
SUARA: A scalable universal allreduce communication algorithm for acceleration of parallel deep learning applications
Parallel and distributed deep learning (PDNN) has become an effective strategy to reduce the long training times of large-scale deep neural networks. Mainstream PDNN software packages based on the message-passing interface (MPI) and employing ...
Highlights
- A novel scalable universal allreduce collective algorithm called SUARA.
- An optimized Open MPI SUARA implementation, SUARA2, with speedup O(P).
- 2x practical speedup of SUARA2 over native Open MPI allreduce for P = 1024 processes.
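SUARA executes allreduce as a sequence of sub-operations whose algorithms are selected to suit the platform. As background only, the mpi4py sketch below shows the classic decomposition of allreduce into reduce-scatter followed by allgather and checks it against the library call; SUARA's actual multi-step decomposition, process grouping, and algorithm selection are different. The script assumes mpi4py is installed and is launched with something like `mpirun -n 4 python ...`.

```python
# Hedged sketch: composing allreduce from two sub-collectives (reduce-scatter then
# allgather), the classic decomposition that universal allreduce algorithms
# generalize; SUARA's multi-step decomposition and per-step algorithm selection
# are described in the paper and are not reproduced here.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
P, rank = comm.Get_size(), comm.Get_rank()

n = 8 * P                                   # vector length divisible by P
x = np.full(n, rank, dtype=np.float64)      # each rank contributes its own vector

# Step 1: reduce-scatter leaves each rank with the reduced sum of one block.
block = np.empty(n // P, dtype=np.float64)
comm.Reduce_scatter_block(x, block, op=MPI.SUM)

# Step 2: allgather reassembles the fully reduced vector on every rank.
result = np.empty(n, dtype=np.float64)
comm.Allgather(block, result)

# Cross-check against the library allreduce.
expected = np.empty(n, dtype=np.float64)
comm.Allreduce(x, expected, op=MPI.SUM)
assert np.allclose(result, expected)
if rank == 0:
    print("reduce-scatter + allgather matches allreduce")
```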
Privacy-preserving offloading scheme in multi-access mobile edge computing based on MADRL
With the development of industrialization and intelligent technologies, the Industrial Internet of Things (IIoT) has gradually become the path by which traditional industries transform into modern ones. In order to adapt to the emergence of a large number ...
Highlights
- Modeling of a complex multi-user, multi-access offloading environment.
- Protection of users' offloading preferences, with privacy entropy taken into consideration.
- Joint optimization scheme for QoS and privacy preservation in ...
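The highlights mention privacy entropy as the measure of how well users' offloading preferences are hidden. Purely as an illustration of the idea, the sketch below computes the Shannon entropy of a user's observed offloading targets: always choosing the same server yields entropy 0 (preference fully exposed), while spreading decisions raises the entropy. The exact privacy-entropy definition used in the paper and its coupling to the MADRL reward may differ.

```python
# Illustrative only: quantifying how much a user's offloading history reveals about
# their preferences using Shannon entropy over the chosen offloading targets.
# The paper's "privacy entropy" definition may differ from this simple form.
from collections import Counter
from math import log2

def offloading_entropy(decisions):
    """decisions: list of chosen servers, e.g. ['edge1', 'edge1', 'cloud', ...]."""
    counts = Counter(decisions)
    total = len(decisions)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Always offloading to the same server leaks the preference (entropy 0);
# spreading decisions across servers hides it (entropy approaches log2(#servers)).
print(offloading_entropy(["edge1"] * 10))                            # 0.0
print(offloading_entropy(["edge1", "edge2", "cloud", "edge3"] * 3))  # 2.0
```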
Redactable consortium blockchain based on verifiable distributed chameleon hash functions
As application demands evolve, the inherent immutability of consortium blockchains hinders their widespread adoption. For example, expired data stored on the chain cannot be deleted, and erroneous data cannot be redacted, seriously ...
Highlights
- This paper proposes a verifiable distributed chameleon hash function (VDCH).
- VDCH makes the process of computing hash collisions fault-tolerant.
- This paper proposes a threshold signature-based consensus protocol (CVTSS).
- The ...
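For background, the toy sketch below implements a classic discrete-log chameleon hash (Krawczyk-Rabin style): the holder of the trapdoor can compute a collision, which is what makes controlled redaction possible. VDCH's contributions (distributing the trapdoor and making collision computation verifiable and fault-tolerant) are not shown, and the tiny group parameters are for illustration only.

```python
# Toy sketch of a classic discrete-log chameleon hash: whoever knows the trapdoor x
# can compute a collision, i.e. change the message while keeping the hash unchanged.
# VDCH distributes this trapdoor across parties and makes collision computation
# verifiable and fault-tolerant; none of that is shown, and the parameters are toy-sized.
p, q, g = 23, 11, 4            # toy group: g has prime order q in Z_p*
x = 7                          # trapdoor
h = pow(g, x, p)               # public key

def ch(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

def collide(m, r, m_new):
    # Solve m + x*r = m_new + x*r_new (mod q) for r_new.
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r = 3, 5
m_new = 9
r_new = collide(m, r, m_new)
assert ch(m, r) == ch(m_new, r_new)
print("same hash for both messages:", ch(m, r))
```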
Interference-aware opportunistic job placement for shared distributed deep learning clusters
Distributed deep learning frameworks facilitate large deep learning workloads. These frameworks support sharing one GPU device among multiple jobs to improve resource utilization. Modern deep learning training jobs consume a large amount of GPU ...
Highlights
- Introduce a stochastic model to describe memory oversharing of deep learning jobs on a shared GPU device.
- Introduce a novel interference-aware Opportunistic Job Placement Problem for shared distributed deep learning clusters.
- ...
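As a simple stand-in for the stochastic memory model, the sketch below treats each co-located job's GPU memory demand as an independent normal random variable and computes the probability that the combined demand exceeds device capacity, which a placer could compare against a risk budget. The distributions, capacity, and threshold are hypothetical; the paper's model of memory oversharing may take a different form.

```python
# Hedged illustration of a stochastic oversharing check: if each co-located job's
# GPU memory demand is modeled as an independent normal random variable, the chance
# that their sum exceeds device capacity has a closed form. Means, stddevs, capacity,
# and the risk budget below are hypothetical.
from math import erf, sqrt

def overflow_probability(jobs, capacity_gb):
    """jobs: list of (mean_gb, std_gb) memory demands; returns P(sum > capacity)."""
    mean = sum(m for m, _ in jobs)
    std = sqrt(sum(s * s for _, s in jobs))
    z = (capacity_gb - mean) / std
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2)))     # 1 - Phi(z)

def can_colocate(jobs, capacity_gb=16.0, risk_budget=0.05):
    return overflow_probability(jobs, capacity_gb) <= risk_budget

jobs = [(6.0, 1.0), (5.0, 1.5), (4.0, 0.5)]
print(overflow_probability(jobs, 16.0), can_colocate(jobs))
```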