Survey | Open access

Computational Resource Allocation in Fog Computing: A Comprehensive Survey

Published: 17 July 2023

Abstract

Fog computing is a paradigm that allows the provisioning of computational resources and services at the edge of the network, closer to end devices and users, complementing cloud computing. The heterogeneity and large number of devices are challenges to obtaining optimized resource allocation in this environment. Several surveys on resource management in fog computing have been published over time; however, in light of recent publications, they no longer provide a sufficiently broad and deep view of this subject. This article presents a systematic literature review focused on resource allocation for fog computing, in a more comprehensive way than the existing works. The survey is based on 108 selected publications from 2012 to 2022. The analysis exposes the main techniques, metrics, evaluation tools, virtualization methods, and architectures used, as well as the domains where the proposed solutions were applied. The results provide an updated and comprehensive view of resource allocation in fog computing. The main challenges and open research questions are discussed, and a new fog computing resource management cycle is proposed.

1 Introduction

Fog computing has emerged as a promising solution to meet the growing demand for expanding processing, network, and storage capacity closer to end users, thus compensating for the limitations of cloud computing [33]. However, as this is an emerging paradigm, there are several open research questions and many challenges to be overcome [178]. Among these challenges is computational resource allocation, which aims to provide the appropriate computational resources to a service or application so that it can achieve the defined performance and Quality of Service (QoS) metrics [94]. Therefore, an analysis of the current state-of-the-art is relevant to assist both academia and industry in progressing work on this topic.
Fog computing environments are dominated by dynamic contexts such as the Internet of Things (IoT); combined with competition for allocated computational resources, this can lead to unpredictable events such as service unavailability, high response times, and decreased reliability [46]. There are barriers to reusing the resource allocation concepts applied in other computational paradigms, such as cloud computing, which requires the development of new proposals. Understanding the proposals already presented, and the challenges still to be overcome, is therefore fundamental at this time.
Resource allocation is a step in the resource management process, which comprises several other steps in addition to allocation, such as estimation, discovery, and monitoring. Over the years, some research has been published on resource management for fog computing [65, 75, 81]. However, as they take a more general approach, these surveys do not go deep into any of the stages, presenting a relevant but superficial conceptualization of each theme, and there is no consensus on which steps make up resource management. As fog computing has been evolving, more specific work on each of the resource management steps, among them resource allocation, is now both possible and needed.
This work presents a systematic and comprehensive review of the literature on computational resource allocation for fog computing from 2012 to 2022. To achieve this goal, six research questions were formulated to guide the analysis and present the relevant results on this topic. In addition, once the important aspects of resource allocation are highlighted, the challenges of this approach are presented. Thus, the main contributions of this work are:
Analyze and compare the most relevant surveys on resource management for fog computing;
Define and delimit the scope and steps of resource management for fog computing;
Present the state-of-the-art regarding resource allocation for fog computing, since this process proved to be the most relevant in the evaluated articles;
Present the main metrics and techniques for resource allocation in a fog computing environment;
Discuss some of the relevant challenges to be overcome in the context of resource allocation in fog computing.
This work is organized as follows. Section 2 presents the main characteristics of fog computing, comparing it with some other similar distributed paradigms. In Section 3 the surveys on resource management are analyzed and the delimitation of the scope of this term for the development of this work is proposed. The methodology used for the elaboration of the systematic review, as well as the research questions that guided the development of this work, are presented in Section 4. The analysis of the selected papers is presented in Section 5. Related works on resource allocation are presented and compared with this work in Section 6. The challenges inherent to the theme are discussed in Section 7. Finally, Section 8 contains the conclusion and opportunities for future work.

2 Fog Computing

Fog computing was introduced in 2012 [33] with the aim of providing computing, storage, and network services between end-devices and cloud providers, complementing these resources when requirements cannot be met by traditional cloud services. As in nature, where fog is a cloud closer to the ground, fog computing is intended to complement cloud computing, bringing computational capabilities closer to end-users.
In recent years, the fog computing concept has been refined both by academia [46, 137, 181, 191] and industry [84, 145]. However, there is still no consensus about its definition, nor about requirements related to it, such as scope, devices, architecture, and service models. In our work [20], we state that fog computing is a distributed architecture that uses the computational resources of devices located between end-users and the cloud to optimize processing and reduce applications’ response times, meeting demands that until now could not be met. Besides fog computing, there are some other similar distributed paradigms, such as:
Edge computing: composed of devices in a fixed location that produce and consume data, participating in the processing, focusing on the end-device level, while fog computing focuses on the infrastructure level [46];
Multi-access Edge Computing: integrated with Radio Access Network (RAN) it is an implementation of edge computing to bring computational and storage capacities to the edge of the network [50];
Mobile Cloud Computing: a combination of cloud computing, mobile computing, and wireless communication where data storage and data processing occur outside of the mobile device [49];
Mobile Ad hoc Cloud Computing: a pool of devices closer to the user, deployed over dynamic networks, addressing situations in Mobile Cloud Computing where connectivity to cloud environments is not possible [22];
Mist Computing: defined as a lightweight form of fog computing, located on the edge of the network, using microcomputers and micro-controllers [153];
Dew Computing: on-premises computational resources provide functionality that works regardless of internet connectivity, fully realizing the potential of both on-premises resources and cloud services [154].
A comparative analysis between fog computing and these related paradigms is presented in our work [20]. The fog computing concept is thus broader and more complete, and can be considered an umbrella that encompasses all these similar paradigms [41]. Fog computing can base its communication infrastructure on Software-Defined Networking (SDN) [134], on Radio Access Networks (RAN) [46], on Fog Radio Access Networks (F-RAN) [147], or on a composition of these technologies [137].
The most common architecture used to represent a fog computing environment is composed of three layers, namely the IoT Layer, Fog Layer, and Cloud Layer, as presented in Figure 1. However, some authors present hierarchical architectures with four [176], five [136], or even six [60] layers, varying only in the structuring of the Fog Layer while keeping the IoT Layer and the Cloud Layer. A comprehensive review of fog computing architectures can be found in [75]. As in our previous work [44], and unlike other proposals that use hierarchical architectures with hard borders between layers, this work considers that it is not possible to define exactly where the Fog Layer is delimited, since fog nodes can be found near the edge (e.g., close to IoT devices) as well as near the cloud (e.g., in telecom devices), as shown in Figure 1.
Fig. 1. Fog Computing architecture overview.
The IoT Layer represents the IoT devices connected at the edge of the network, where end-users can request services to be processed in the upper layers. The Fog Layer is positioned between the IoT devices and cloud computing, and provides functionalities for processing applications, such as filtering and aggregation, before transferring the data to the cloud [11]. This layer is composed of nodes, commonly called Fog Nodes: hardware devices that offer software and hardware resources with high communication capability. A comprehensive analysis of the computational perspective of a fog node can be found in [19].
Finally, the Cloud Layer is composed of cloud providers’ services, with more robust computational resources to process all requests made by the IoT Layer that have not been fully met by the Fog Layer. The existence of the cloud is fundamental in a fog computing environment [11], since fog only complements and does not replace cloud computing.
The most common characteristics to consider in fog computing environments are [84]: low latency, geographic distribution, heterogeneity, interoperability, real-time interactions, and scalability. Thus, fog computing is suitable when the cloud alone cannot meet application requirements such as low latency and short runtimes [67].

3 Resource Management in Fog Computing

Resource management is relevant in several areas of research because it aims at the optimized use of available resources. This process is composed of steps that together aim to use a reasonable amount of computational resources in an easy and efficient way [95]. Resource management is widely discussed in fog computing since computational resources are often limited and must be used well [19], unlike cloud computing, which gives the perception of infinite resources from the point of view of a single user.
Therefore, it is possible to find some literature review publications on resource management specifically for fog computing and related distributed paradigms. Table 1 shows a summary of the analyzed publications on this topic. As can be seen in Table 1, the literature diverges on the steps that compose the resource management process in fog computing, and each author considers different approaches and steps to contextualize it.
The surveys are compared against the following 24 resource management steps: Estimation, Discovery, Allocation, Placement, Migration, Scheduling, Sharing, Optimization, Provisioning, Caching, Offloading, Preprocessing, Coordination, Load Balancing, Benchmarking, Modeling, Monitoring, Composition, Data Management, Detection, Selection, Mapping, Distribution, and Assignment. The publications analyzed are: [119] (2017); [178], [139], [108] (2018); [65], [81] (2019); [89], [171], [31], [150], [121], [14], [76], [29], [132], [75], [120], [28] (2020); and [144], [128], [85], [127], [116], [164], [105] (2021).
Table 1. Related Surveys
By detailing the definitions of all steps in each publication, it is possible to find the intersections among them and consolidate them into a smaller but representative group. For example: for Nath et al. [139], resource allocation is when “different resources should be properly allocated to different devices”, but Mijuskovic et al. [128] consider this to be the Placement step. This occurs with most of the definitions. To resolve this confusion between the concepts and the different steps that each author assigned to the resource management process, a detailed analysis was carried out across the articles presented in Table 1. After this analysis, it was possible to group all 24 steps presented in Table 1 into just five steps, namely Discovery, Estimation, Allocation, Monitoring, and Orchestration; the whole scope of resource management in the fog computing environment can be represented by them, as presented in Figure 2.
Fig. 2. Resource management steps analysis.
Based on this proposal, the resource management process can be briefly explained. The Discovery step aims to find available resources in the fog environment. The Estimation step defines the amount of resources that will be needed for workload execution [120]. The Allocation step selects the resources that meet the requirements defined in the Estimation step, reserving and delivering them to perform the tasks while meeting the Quality of Service (QoS) criteria defined previously [171]. Once the Allocation step is complete, the Monitoring process starts; it considers aspects such as elasticity, load balancing, fault tolerance, health checks, and the like [43]. An Orchestration process is used throughout this cycle to provide complementary functions such as security, request management, and communication management [44]. Each of the five steps is detailed in the next subsections.
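As a concrete illustration, the cycle above can be sketched in a few lines of Python. All node and task structures, field names, and thresholds below are hypothetical and greatly simplified; they are not taken from any surveyed proposal.

```python
# Hypothetical sketch of the five-step resource management cycle
# (Discovery, Estimation, Allocation, Monitoring; Orchestration would
# wrap this loop). All data structures are illustrative.

def discover(catalog):
    """Discovery: find the currently available fog resources."""
    return [node for node in catalog if node["available"]]

def estimate(task):
    """Estimation: rough amount of resources the task will need."""
    return {"cpu": task["ops"] / task["deadline"], "mem": task["data_mb"]}

def allocate(nodes, demand):
    """Allocation: pick the first node satisfying the estimated demand."""
    for node in nodes:
        if node["cpu"] >= demand["cpu"] and node["mem"] >= demand["mem"]:
            return node
    return None  # no fog node fits: fall back to the cloud layer

def monitor(node):
    """Monitoring: report status so an orchestrator can react."""
    return {"node": node["id"], "cpu_load": node["cpu_load"]}

catalog = [
    {"id": "fog-1", "available": True, "cpu": 2.0, "mem": 512, "cpu_load": 0.3},
    {"id": "fog-2", "available": True, "cpu": 8.0, "mem": 2048, "cpu_load": 0.6},
]
task = {"ops": 4.0, "deadline": 1.0, "data_mb": 1024}

nodes = discover(catalog)
demand = estimate(task)          # needs cpu = 4.0, mem = 1024
chosen = allocate(nodes, demand)
print(monitor(chosen))           # fog-2 is the only node that fits
```

In a real environment each step would be a distributed service rather than a local function, but the data flow between the steps is the same as in Figure 2.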

3.1 Resource Estimation

One of the main requirements in managing computing resources is the ability to estimate how many resources will be needed to perform a task [178]. For Manvi and Shyam [123], this process is “a rough estimate of the real resources needed to run an application, usually with some thought or calculation involved”. Going deeper into this definition, Mahmud et al. [119] indicate that estimation is a process that assists in the allocation of appropriate computational resources, according to some policies and/or criteria, aiming to reach the determined level of QoS. Thus, given the restricted computational capabilities of fog nodes, the estimation process plays a fundamental role in the allocation and optimal use of resources in fog computing [194].
The estimation for fog computing can involve perspectives other than computational resources [194], such as pricing [3, 27] and energy consumption [110]. In addition, resource estimation also depends on the type of device, mobility, energy, the types of data to be generated or processed, the communication method, security, and user behavior [31]. The estimation must be able to handle fluctuations in resource demand on both the provider and end-user sides, because resources can be mobile and therefore quickly become inaccessible, which makes them less reliable than cloud computing resources, for instance. Linked to this, the requester’s mobility must also be considered, which implies a sudden turnover of users, resulting in dynamic requests [178]. Considering this scenario, the resource estimation step should be performed with minimal overhead and high precision [139].
After analyzing all papers presented in Table 1, and considering all similar terms that can be related to resource estimation, as summarized in Figure 2, our proposed definition for this step is: “Resource estimation plays an essential role in the resource management process and refers to the calculation of the amount of computing resources and the time needed to perform tasks in fog computing environments”.
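As a toy illustration of estimating demand under the fluctuations discussed above, the sketch below uses an exponentially weighted moving average with a safety margin. The smoothing factor, headroom, and demand values are assumptions for illustration only, not techniques attributed to any surveyed paper.

```python
# Illustrative demand estimator: an exponentially weighted moving
# average (EWMA) over recently observed demand, inflated by a headroom
# margin to absorb fluctuation. alpha and headroom are assumed values.

def ewma_estimate(history, alpha=0.5, headroom=1.2):
    """Return an estimate of the next demand with a safety margin."""
    estimate = history[0]
    for observed in history[1:]:
        # recent observations weigh more than older ones
        estimate = alpha * observed + (1 - alpha) * estimate
    return estimate * headroom

cpu_history = [1.0, 1.2, 0.9, 1.5]  # observed CPU demand per interval
print(round(ewma_estimate(cpu_history), 3))
```

A real estimator would also weigh device type, mobility, and energy, as noted above; the point here is only that estimation turns an observed history into a reserved amount with low overhead.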

3.2 Resource Discovery

Manvi and Shyam [123] define the discovery process as identifying the list of authenticated computational resources that are available for submitting jobs, and choosing the best among them. For Singh and Chana [166], resource discovery is a process of identifying available resources and generating a list for later selection. Thus, resource discovery encompasses the actions of locating and disseminating resource information, which is essential to fully exploit all resources distributed in the environment [197].
Considering the characteristics of fog computing, such as mobility, high geographic distribution, and heterogeneity, the resource discovery process is considered a hard and fundamental challenge for the environment [47]. Masip et al. [126] state the resource discovery problem in fog computing as designing a solution to find resources belonging to components willing to join a fog environment. Such a solution must consider the different characteristics inherent to fog computing, such as mobility or collaborative models.
By analyzing the papers presented in Table 1, the Discovery step was grouped with the Detection step, and both refer to the task of providing a collection of resources [128]. According to Javadpour et al. [89], there are five main classes of resource discovery mechanisms in fog computing: centralized, decentralized, peer-to-peer, hierarchical, and agent-based.
Based on these references, our proposed definition of the Resource Discovery step is the following: “Resource Discovery refers to the task that aims to find the available resources in the fog computing environment, while updating the resource catalog”.
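A centralized discovery mechanism, one of the five classes listed above, could be sketched as a heartbeat-based resource catalog: nodes announce themselves periodically, and entries whose heartbeat has gone stale are dropped. The timeout value and the API below are hypothetical.

```python
# Hypothetical sketch of centralized resource discovery: fog nodes send
# heartbeats, and discover() returns only nodes whose entry is still
# fresh, keeping the resource catalog updated. Timeout is illustrative.

import time

class ResourceCatalog:
    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.nodes = {}  # node id -> timestamp of last heartbeat

    def heartbeat(self, node_id, now=None):
        """A fog node announces itself (or refreshes its entry)."""
        self.nodes[node_id] = now if now is not None else time.time()

    def discover(self, now=None):
        """Return ids of nodes whose heartbeat is still fresh."""
        now = now if now is not None else time.time()
        return sorted(n for n, t in self.nodes.items()
                      if now - t <= self.timeout)

catalog = ResourceCatalog(timeout=5.0)
catalog.heartbeat("fog-1", now=0.0)
catalog.heartbeat("fog-2", now=3.0)
print(catalog.discover(now=6.0))  # fog-1 is stale, fog-2 is still fresh
```

Mobility is what makes the timeout essential: a node that moves out of range simply stops refreshing its entry and silently leaves the catalog.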

3.3 Resource Monitoring

Monitoring is a relevant functionality, considered crucial for properly orchestrating fog services and for collecting updated status information about fog nodes and communication links [62]. A fog monitoring service can be composed of three essential functions [7]:
Observation: the acquisition of updated statuses of resource usage (e.g., CPU load and latency) or service performance (e.g., response time);
Data processing: the adjustments and transformations required on the data, and the notifications derived from pre-configured rules and thresholds;
Data exposition: where the generated data is stored and how it can be accessed by a management system.
By analyzing Table 1, the Migration and Optimization steps take an approach similar to monitoring, which is why they were grouped with it in the proposal presented in Figure 2. A literature review and a taxonomy of resource monitoring in fog computing are presented in [43]. With this, our proposed definition of the resource monitoring step is: “Monitoring collects updated status information about fog nodes and communication links and sends it to the orchestrator, which can take proper action to guarantee the SLAs”.
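The three monitoring functions above can be sketched together in a few lines. The thresholds, field names, and alert labels are illustrative assumptions, not values from the surveyed works.

```python
# Minimal sketch of the three fog monitoring functions: observation,
# data processing (threshold rules), and data exposition (a store the
# orchestrator can read). All thresholds and names are assumed.

def observe(node):
    """Observation: acquire an updated status snapshot of a node."""
    return {"node": node["id"], "cpu_load": node["cpu_load"],
            "latency_ms": node["latency_ms"]}

def process(status, cpu_threshold=0.8, latency_threshold=100):
    """Data processing: derive notifications from pre-configured rules."""
    alerts = []
    if status["cpu_load"] > cpu_threshold:
        alerts.append("cpu_overload")
    if status["latency_ms"] > latency_threshold:
        alerts.append("high_latency")
    return alerts

exposed = {}  # Data exposition: where an orchestrator reads the results

node = {"id": "fog-1", "cpu_load": 0.92, "latency_ms": 40}
status = observe(node)
exposed[status["node"]] = {"status": status, "alerts": process(status)}
print(exposed["fog-1"]["alerts"])  # ['cpu_overload']
```

The exposition store is the hand-off point to orchestration: in the cycle of Figure 2, the orchestrator consumes these alerts to enforce the SLAs.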

3.4 Resource Orchestration

Fog orchestration is a management function responsible for the service life cycle. To provide requested services to the user and assure the Service Level Agreement (SLA), it must monitor the underlying infrastructure, react to its changes in a timely manner, and comply with privacy and security rules [44].
However, no paper analyzed in Table 1 used the term “orchestration” to name a step in the resource management process. It is proposed in this paper, in Figure 2, because it best subsumes the other terms used. For example: although Li et al. [108] call this the coordination step, they consider that it “aims to coordinate various edge nodes”, which is the same purpose as orchestration.
Considering the analyzed papers, our definition of the fog orchestration step is: “Fog orchestration is a management function responsible for the service life cycle. To provide requested services to the user and assure the SLAs, it must monitor the underlying infrastructure, react to its changes in a timely manner, and comply with privacy and security rules”.

3.5 Resource Allocation

Considering the steps presented in Table 1 and grouped in Figure 2, the Allocation step plays an essential role in the resource management process, encompassing the largest number of terms and being the one discussed by most of the authors.
For Nath et al. [139] and Luo et al. [116], resource allocation must deliver the most suitable resource based on the knowledge obtained from previous steps, such as resource estimation or offloading. Both Mahmud et al. [119] and Mijuskovic et al. [128] state that resource allocation is a technique used to optimize resource utilization.
In fog computing, the allocation of resources aims to serve a set of tasks \(T = \lbrace T_1, T_2, \ldots, T_n\rbrace\), each with different QoS requirements (such as cost and execution time), on a set of resources \(R = \lbrace R_1, R_2, \ldots, R_m\rbrace\) with different computational capacities, using an objective function as the criterion (such as minimizing cost or maximizing usage) [65]. The Resource Allocation step is represented in Figure 3. Resource allocation can be seen as the step that receives the estimation information and makes the effective reservation of computational resources so that the task can be executed in the fog computing environment, according to the QoS requirements.
Fig. 3. Resource allocation step.
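The formulation above (tasks T, resources R, and an objective function) can be illustrated with a small greedy heuristic that takes "minimize cost" as the objective. This is a simplified sketch, not an optimal solver, and all task and resource data are hypothetical.

```python
# Illustrative sketch of the allocation formulation: assign each task
# in T to a resource in R with enough capacity, using "minimize cost"
# as the objective function. A greedy heuristic with assumed data.

tasks = [{"id": "T1", "cpu": 2}, {"id": "T2", "cpu": 4}]
resources = [
    {"id": "R1", "cpu": 4, "cost": 1.0},  # cheap but small
    {"id": "R2", "cpu": 8, "cost": 3.0},  # larger, more expensive
]

def allocate_min_cost(tasks, resources):
    """Greedy: cheapest resource with enough remaining capacity."""
    assignment = {}
    for task in sorted(tasks, key=lambda t: -t["cpu"]):  # big tasks first
        candidates = [r for r in resources if r["cpu"] >= task["cpu"]]
        if not candidates:
            continue  # no fog resource fits; would be offloaded upward
        chosen = min(candidates, key=lambda r: r["cost"])
        chosen["cpu"] -= task["cpu"]  # reserve the capacity
        assignment[task["id"]] = chosen["id"]
    return assignment

# T2 takes the cheap node R1; T1 then only fits on R2
print(allocate_min_cost(tasks, resources))
```

Real proposals replace this greedy rule with the techniques surveyed later (heuristics, optimization solvers, learning-based methods), but the inputs and outputs match Figure 3: estimated requirements in, reserved resources out.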
As with any process, resource allocation has inputs and outputs. Inputs come from the previous steps, which in this resource management proposal means the Estimation step; they indicate QoS metrics as well as other (e.g., computational) requirements. The allocation itself is defined by the allocation technique, the virtualization method, and the architecture layers covered, which together produce the allocated resources as output. An overview of resource allocation is presented in Figure 4.
Fig. 4. Resource allocation flow overview.
Considering both Figure 3 and Figure 4, it is possible to propose our definition of the Resource Allocation step: “Resource allocation refers to the step in the resource management process that aims to select, reserve, and use the best available resources to execute a workload in the fog environment, while respecting QoS parameters”. Based on this definition, and on the need to become more familiar with the resource allocation proposals existing in the literature, the Research Questions (RQs) to be answered by this work were determined. These questions help to establish what is being researched and what results should be achieved, and guide the systematic literature review. To this end, six RQs were formulated, as follows:
RQ1: What metrics are used? - this question aims to find the most used metrics and requirements passed as input to the resource allocation flow, and to help ascertain their meaning;
RQ2: What techniques are used? - the answer will show the most common techniques used in the literature so far, and indicate possible trends and gaps to be studied;
RQ3: Which architecture layers are considered? - considering the fog computing architecture presented in Figure 1, the answers will show which layers are covered and the relevance of each one in the resource allocation approaches;
RQ4: What virtualization techniques are used? - this question will help to categorize the proposals according to the most used virtualization techniques;
RQ5: How are the proposed approaches evaluated? - this question aims to present the ways in which resource allocation proposals are currently being evaluated in the literature;
RQ6: What are the most common use cases? - this question will help to identify the most common domains where fog computing is being applied, and highlight new areas to be studied.
The first research question (RQ1) refers to the metrics and requirements passed as input. The techniques (RQ2), the covered layers in architecture (RQ3), and the virtualization model (RQ4) form the core of the Resource Allocation step and are essential to deliver the allocated resource as output. Finally, RQ5 and RQ6 were chosen to present the main evaluation tools used to validate the resource allocation proposals and the most commonly applied use cases, respectively.

4 Research Methodology

The methodology of Systematic Literature Review (SLR) adopted in this work was based on the works [100] and [149]. The steps for the development of this research are summarized in Figure 5.
Fig. 5. Steps of SLR methodology used [100] and [149].
In the first stage, Research Questions (RQ) were defined (Section 3.5), aiming to determine what is being researched, as well as to define which results should be achieved with the systematic review. In addition, research questions also serve to guide the research [100]. These questions were the starting point for search string elaboration, as well as assisting in assessment of the publications. The results are expected to answer these questions.
After the research questions were formulated, the search process included the selection of the online databases to be used, the period covered, and the search string elaboration. For this article, the Scopus, Web of Science, ACM Digital Library, and IEEE Xplore databases were used as research sources. These databases were chosen because they index publications from different publishers, such as Elsevier, IEEE, ACM, and MDPI. The period between 2012 (the year of the first publication on fog computing [33]) and August 2022 was specified. The following search string was used: “TITLE-ABS-KEY((fog) AND (“resource management” OR “resource allocation” OR “resource placement” OR “resource scheduling” OR “resource sharing” OR “resource caching” OR “resource offloading” OR “resource selection” OR “resource load balancing” OR “resource provisioning”))”.
After using the search string in the selected online databases, the inclusion and exclusion criteria were applied. The inclusion criteria defined for this research were: publications written in English; peer-reviewed publications; and publications that present architectural models, techniques, or methods applied to computational resource allocation specifically in fog computing. The exclusion criteria covered publications that did not meet the inclusion criteria, as well as articles duplicated across the databases, articles not related to fog computing, and publications not oriented to computational resource allocation processes. Initially, the search carried out in the databases returned a total of 1,182 publications. Applying the inclusion and exclusion criteria to the titles and abstracts of these publications reduced this number to 203 publications. A second filter was applied considering the adherence of the full text, resulting in 108 publications to be analyzed.
Considering the 108 analyzed publications, a relevant fact is that 33 of them (about 31%) were published in the last three years (2020, 2021, and 2022). This reaffirms the hypothesis that fog computing is a computational paradigm still under development in academia [180]. Another relevant point is that, considering article titles alone, the term “resource allocation” was the most frequent of all the terms in the search string, appearing in about 60% of them. Finally, a full-text reading and analysis of each of the 108 selected papers was performed to help answer the six research questions. The results are presented in the next section.

5 Results

This section presents the results obtained from the 108 selected publications. Figure 6 presents an overview of the research questions to be discussed in this section.
Fig. 6. Research questions overview.

5.1 Resource Allocation Metrics

The first Research Question (RQ1) to be answered refers to the most used metrics for resource allocation in a fog computing environment. A metric is almost always linked to a criterion, giving rise to the allocation objective, which in turn is indicated in terms of QoS [144]. For example, the metric “cost” is linked to the criterion of “reduction”, generating the objective of “minimizing the cost”.
Analyzing the selected publications, the following resource allocation metrics were observed: resource utilization, cost, latency, energy, user experience, and execution time. They are detailed in Table 2, which also gives the percentage of the analyzed papers that each metric represents.
Metric | Papers | Description | %
Resource Utilization | [10, 12, 17, 23, 25, 26, 56, 69, 78, 96, 101, 104, 112, 129, 131, 161, 168, 185, 186, 188, 193] | Maximize the use and availability of resources, considering the adequate mapping of these resources and the correct distribution and allocation [135]. | 19.4%
Cost | [15, 16, 27, 38, 48, 56, 61, 68, 70, 80, 82, 86, 90, 91, 93, 98, 102, 107, 113, 114, 117, 138, 140, 142, 167, 168, 172, 174, 190, 198, 200] | Minimize the cost of allocating resources, considering the infrastructure implementation, the operation costs, and the cost of allocating instances [120]. | 28.7%
Latency | [8, 45, 54, 56, 64, 66, 72, 79, 98, 106, 107, 109, 115, 133, 141, 155, 158, 160, 169, 170, 175, 177, 182, 188, 201, 203] | The communication delay to transfer data between resources, added to the time taken for the task to be processed [87]. | 24.1%
Energy | [40, 45, 63, 66, 110, 111, 138, 155, 156, 175, 187, 195, 204] | Fog computing devices can use both renewable and non-renewable energy; resource allocation techniques focused on energy efficiency aim to minimize energy consumption, and much research has targeted this metric [120]. | 12.0%
User Experience | [1, 2, 3, 4, 5, 32, 51, 92, 99, 161, 179, 189, 202] | Considers that demands can change during the execution of tasks and that service quality levels must be guaranteed throughout the entire service life cycle [120]; QoE is impacted by both quantifiable and non-quantifiable attributes. | 12.0%
Execution Time | [13, 39, 48, 58, 59, 71, 73, 78, 80, 83, 88, 90, 102, 109, 122, 130, 148, 152, 174, 183, 184, 187, 192, 199] | Linked to the objective of reducing delivery time and meeting the specified deadline for workload execution [120]; also referred to as makespan, runtime, throughput, or deadline [87]. In fog computing, many approaches aiming at reduced execution time are linked to reducing latency or energy consumption. | 22.2%
Table 2. Resource Allocation Metrics
From the analysis of the publications presented in Table 2, and answering RQ1, it was possible to identify that the most addressed metric in resource allocation is cost, covered in 31 of the 108 publications (28.7%). In fact, reducing cost is an objective that can intersect with several other metrics, such as reducing energy consumption, latency, or execution time. In this sense, some papers appear in two different classifications [45, 48, 66, 78, 80, 90, 98, 102, 107, 109, 138, 155, 161, 168, 174, 175, 187, 188], which happens when their authors combine different metrics in their proposals. Etemadi et al. [56], for example, proposed a resource allocation model that combined three metrics - cost, latency, and resource utilization - aiming to reduce the total cost and delay violations while increasing fog node utilization. When grouped, these metrics become even more relevant for resource allocation in fog computing, as they optimize resource utilization and thus avoid wasting computational power, since fog nodes usually offer limited resources [135].
Reducing latency is also particularly significant in fog computing when compared to other computing paradigms, such as cloud computing [33], and the results confirm this: it is one of the most used metrics in the analyzed articles (24.1% of them). The demand for fog computing is usually linked to the need to reduce application response time, on which latency has a direct impact. In this way, reducing latency can be considered inherent to the resource allocation process in fog computing and should be pursued by every resource allocation proposal, since low latency is an essential feature of this paradigm [67].
In addition to the metrics listed in this section, other metrics could be proposed. Allocation time, for instance, is a relevant one: it refers to the time needed to receive the workload information, estimate and verify the necessary resources, and make the allocation effective. Considering the mobility and dynamics of fog applications, an optimized allocation time plays a fundamental role in ensuring a good user experience and achieving the required service quality [75].
Other candidate metrics relate to the time and number of migrations between fog and cloud nodes, as an efficient allocation should keep the number of migrations as low as possible. Predictive algorithms can help here, as they can assess how stable the allocated resources are, aiming to ensure that the resources will remain available until the end of workload execution.
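As a concrete illustration of how the metrics above can be combined into a single allocation decision, the sketch below scores hypothetical fog nodes with a weighted sum of normalized cost, latency, and utilization. The node names, attribute values, and weights are all invented for illustration and do not come from any surveyed paper.

```python
# Hypothetical fog nodes described by three of the surveyed metrics.
nodes = {
    "fog-1": {"cost": 0.10, "latency_ms": 5.0, "utilization": 0.60},
    "fog-2": {"cost": 0.05, "latency_ms": 20.0, "utilization": 0.30},
    "cloud": {"cost": 0.02, "latency_ms": 80.0, "utilization": 0.10},
}

def score(node, w_cost=0.4, w_lat=0.4, w_util=0.2):
    """Weighted sum of metrics normalized to [0, 1]; lower is better."""
    max_cost = max(n["cost"] for n in nodes.values())
    max_lat = max(n["latency_ms"] for n in nodes.values())
    return (w_cost * node["cost"] / max_cost
            + w_lat * node["latency_ms"] / max_lat
            + w_util * node["utilization"])

best = min(nodes, key=lambda name: score(nodes[name]))
```

Changing the weights shifts the trade-off between the three metrics; a weighted sum is only one of many possible aggregation strategies.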

5.2 Resource Allocation Techniques

RQ2 asks which are the most used techniques in the analyzed papers for resource allocation in fog computing. Table 3 shows the techniques used in the analyzed papers, grouped in Integer Linear Programming (ILP) / Nonlinear Programming (NLP), Heuristics, Meta-heuristics, Fit-based approaches, Multiple Criteria Decision Making (MCDM), Game-based approaches, and Machine Learning.
Technique | Papers | Description | %
ILP / NLP | [15, 17, 40, 58, 59, 61, 70, 104, 110, 111, 133, 158, 160, 161, 167, 170, 172, 179, 183, 190, 192, 195, 201] | ILP solves constrained optimization problems whose objective function is linear in the control variables. NLP solves optimization problems where some of the constraints or the objective function are non-linear [30]. | 21.3%
Heuristic | [1, 2, 3, 4, 5, 8, 27, 32, 38, 48, 63, 70, 73, 79, 82, 86, 88, 92, 102, 109, 113, 114, 115, 117, 122, 140, 142, 156, 177, 185, 187, 193, 202, 203] | Heuristic algorithms do not guarantee finding the optimal solution to a problem, but return a quality solution in a time suitable for the application needs. | 31.5%
Meta-Heuristic | [10, 16, 66, 72, 111, 138, 148, 175, 186, 204] | Aims to find optimal or near-optimal solutions for the resource allocation problem in fog computing within a reasonable time. | 9.2%
Fit | [90, 141, 152, 169, 190] | Fit-based algorithms direct solutions to the parameter defined in their programming, such as best allocation, best fit, shortest job, or first fit [21]. | 4.6%
MCDM | [12, 13, 23, 25, 26, 51, 54, 78, 83, 107, 112, 129, 130, 182] | Comprises decision-support methods for multiple criteria, providing adaptive models for scheduling, selection, and resource allocation tasks that assign weights to criteria to choose the resources most coherent with the required QoS. | 12.9%
Game | [71, 80, 91, 93, 96, 99, 101, 184, 189, 199, 200] | Uses mathematical models to make optimal decisions under conflicting conditions. Each player has interests or preferences for each situation in the game [6]. | 10.2%
Machine Learning | [17, 39, 45, 56, 58, 64, 68, 69, 98, 106, 131, 155, 168, 174, 188, 198] | Aims to explore, analyze, and find meaning in complex data sets. Encompasses techniques such as Deep Learning and Reinforcement Learning. | 14.8%
Table 3. Resource Allocation Techniques
Linear Programming consists of methods to solve optimization problems with restrictions (constraints), where the objective function is linear in relation to the control variables and the domain of these variables is defined by a system of linear inequalities [205]. The main advantage of Linear Programming is the flexibility to analyze complex problems [159]. A Linear Programming approach was used in [15, 40, 59, 61, 104, 111, 133, 158, 160, 167, 170, 172, 179, 183, 192, 195, 201], which represent 15.7% of the analyzed papers. It is important to highlight that most of these proposals aim to meet only one metric (e.g., reducing latency in [158]), since this is the result of a single linear objective function. In three other papers (2.7% of the analyzed papers) [17, 110, 161], Mixed Integer Linear Programming (MILP), an extension of ILP in which some decision variables are allowed to be continuous, was used to solve the resource allocation problem.
Nonlinear Programming, on the other hand, solves an optimization problem defined by a system of equalities and inequalities (the constraints) over a set of real variables whose values are unknown, with an objective function to be maximized or minimized, where some of the constraints or the objective function are non-linear [30]. This type of approach was used to address the resource allocation problem in fog computing in seven publications: [15, 58, 70, 104, 111, 183, 201]. Among these works, Fan et al. [58] used, in addition to Nonlinear Programming, a Markov Decision Process technique to optimize the results of the resource allocation process in their proposal.
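To make the ILP formulation concrete, the toy sketch below assigns tasks to nodes by minimizing a linear cost subject to capacity constraints. For illustration it solves the binary assignment model by exhaustive search; real proposals rely on dedicated solvers. All task demands, capacities, and costs are invented.

```python
from itertools import product

# Toy ILP-style formulation: a binary choice of node per task, minimizing
# total cost subject to node CPU capacity. Solved by brute force here.
tasks = {"t1": 2, "t2": 3, "t3": 1}       # CPU demand per task
capacity = {"fog": 4, "cloud": 10}        # CPU capacity per node
cost = {"fog": 1.0, "cloud": 3.0}         # cost per CPU unit per node

def solve():
    best_plan, best_cost = None, float("inf")
    for plan in product(capacity, repeat=len(tasks)):   # all assignments
        load = {n: 0 for n in capacity}
        for task, node in zip(tasks, plan):
            load[node] += tasks[task]
        if any(load[n] > capacity[n] for n in capacity):  # capacity constraint
            continue
        total = sum(cost[node] * tasks[task] for task, node in zip(tasks, plan))
        if total < best_cost:
            best_plan, best_cost = dict(zip(tasks, plan)), total
    return best_plan, best_cost

plan, total = solve()
```

The optimum fills the cheap fog node to capacity and sends the remainder to the cloud; exhaustive search is exponential in the number of tasks, which is why ILP solvers (or the heuristics discussed next) are used in practice.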
Heuristics were the most used techniques in the analyzed papers. In this type of solution, decisions are based only on the information currently available, without regard for the future effects of those decisions, thus making the locally optimal choice at each stage of execution. The goal is to find a good, though not necessarily optimal, global solution. This type of approach suits the fog computing model because it copes well with the dynamics of the environment, which stem from the essential characteristics of high geographic distribution, heterogeneity, and interoperability. Heuristics are therefore considered easy to implement and efficient [34]. Among the analyzed papers on resource allocation in fog computing, some authors (12%) proposed new heuristic algorithms, as in [27, 63, 73, 79, 88, 102, 115, 122, 156, 177, 187, 193, 202]. Some well-known heuristic methods were also used, such as price-based approaches (10.1% of the total) [1, 2, 3, 4, 5, 86, 92, 117, 140, 142, 203], greedy algorithms (2.7%) [32, 82, 114], the Lyapunov optimization approach (2.7%) [8, 38, 109], and the Hungarian algorithm (0.92%) [185].
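A minimal greedy heuristic in the spirit described above might look as follows: it makes the locally optimal choice (the node with the most spare capacity) at each step, with no backtracking. Node names, capacities, and demands are hypothetical.

```python
def greedy_allocate(tasks, capacity):
    """Greedy heuristic: place the largest task first on the node with the
    most spare capacity; locally optimal at each step, no backtracking."""
    free = dict(capacity)
    plan = {}
    for task, demand in sorted(tasks.items(), key=lambda kv: -kv[1]):
        node = max(free, key=free.get)       # node with most spare capacity
        if free[node] < demand:
            raise RuntimeError(f"no node can host {task}")
        plan[task] = node
        free[node] -= demand
    return plan

plan = greedy_allocate({"a": 3, "b": 2, "c": 2}, {"fog-1": 4, "fog-2": 4})
```

The result is produced in a single pass, which is why such heuristics scale well in dynamic fog environments, at the price of possibly missing the global optimum.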
Similarly, meta-heuristic techniques combine basic heuristics at a higher structural level, while also aiming to find an optimal or near-optimal solution in a limited execution time, which is very relevant in fog computing environments given their dynamism and mobility characteristics. Within this category are Evolutionary Algorithms, which are based on the principles of natural evolution, maintaining a population of candidate solutions throughout the search [53]. After initialization, new solutions are generated iteratively by selecting good solutions from the population, crossing, and mutating them. New individuals are evaluated and inserted into the population, usually replacing the worst solutions. The algorithm is normally interrupted after a certain number of iterations, returning the best solution found in that period [53]. Evolutionary algorithms for resource allocation in fog computing environments were used in seven papers, with the following algorithms: Elitist Selection Strategy [138] and [148], Pigeon Inspired Optimization [16], Weighted Sum Genetic Algorithm [72], Hungarian Algorithm [10], Directed Acyclic Graph [175], and Estimation of Distribution Algorithm [186]. Besides these, other meta-heuristic techniques were found in the reviewed papers, such as the Particle Swarm Optimization algorithm used in [66] and Ant Colony Optimization adopted in [204].
Considering the fit-based approaches, the First-Fit algorithm was used in three papers [141, 152, 169]. In this technique, the allocation problem is solved by providing the first resource that satisfies the requested parameters, regardless of whether better options exist. Similarly, the Shortest Job First algorithm used in [90] prioritizes the allocation of the smallest requests. Only the Best-Fit algorithm presented in [190] looks for the best allocation considering the inputs and the available resources. Although these proposals are valid for some fog computing scenarios, they may be ineffective in environments with high demand, that is, with a large number of requests from IoT devices, for example.
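The difference between the First-Fit and Best-Fit strategies can be sketched as follows, assuming hypothetical nodes with given spare capacities.

```python
def first_fit(demand, free):
    """Return the first node with enough spare capacity, ignoring better options."""
    for node, spare in free.items():
        if spare >= demand:
            return node
    return None

def best_fit(demand, free):
    """Return the feasible node that leaves the least spare capacity behind."""
    feasible = {n: s for n, s in free.items() if s >= demand}
    return min(feasible, key=feasible.get) if feasible else None

free = {"fog-1": 8, "fog-2": 3, "cloud": 100}
```

For a request of 2 CPU units, First-Fit stops at the first feasible node while Best-Fit picks the tightest one, preserving large nodes for large future requests.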
With regard to MCDM techniques, in [129] the authors used the PROMETHEE method (Preference Ranking Organization Method for Enrichment Evaluation), while in [107] and [112] the authors used the ELECTRE method (Elimination Et Choice Translating Reality). The AHP (Analytic Hierarchy Process) method was used in [51, 54, 130, 182]. Although these methods were able to meet some established QoS criteria, they could not guarantee that the minimum requirements were met. The same occurs with TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), which was used in [23, 25, 26, 83] and is also limited in its ability to achieve refined delivery quality.
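As an illustration of an MCDM method, the sketch below implements the standard TOPSIS steps (vector normalization, weighting, distances to the ideal and anti-ideal solutions, closeness coefficient) to rank three hypothetical fog nodes; the decision matrix and weights are invented.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns); benefit[j] says
    whether criterion j is better when higher (True) or lower (False)."""
    # 1. Vector-normalize each column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(len(weights))]
    v = [[weights[j] * row[j] / norms[j] for j in range(len(weights))]
         for row in matrix]
    # 2. Ideal and anti-ideal points per criterion.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    # 3. Closeness coefficient: distance to anti-ideal over total distance.
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Rows: fog-1, fog-2, fog-3; columns: CPU cores (benefit), latency in ms (cost).
scores = topsis([[8, 10], [4, 5], [2, 40]], [0.5, 0.5], [True, False])
best = scores.index(max(scores))
```

The node dominating both criteria gets the highest closeness coefficient; as noted above, such a ranking does not by itself guarantee that minimum requirements are met.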
The game-based approach uses mathematical models to make optimal decisions under conflicting conditions. A basic element is the set of players who participate in the game, each with a set of strategies; the choice of a strategy determines one situation among all possible situations, and each player has interests or preferences over these situations [6]. Among the analyzed publications, Stackelberg game theory [143], in which the leader moves first and the other players move in sequence, was the most used (2.7% of the total papers) [80, 93, 199]. Unlike proposals based on MCDM, these techniques are better able to work around the variations and restrictions of the fog computing environment, and therefore achieve interesting results in the resource allocation process for this paradigm.
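A toy Stackelberg pricing game can illustrate the leader-follower structure: a fog provider (leader) announces a unit price and each user (follower) best-responds with its demand. The utility functions and constants below are invented for illustration.

```python
def follower_demand(price, budget=10.0):
    """Follower utility budget*d - price*d - d^2 is maximized at
    d = (budget - price) / 2, clipped at zero."""
    return max(0.0, (budget - price) / 2)

def leader_best_price(unit_cost=2.0, n_users=3):
    """Leader enumerates candidate prices, anticipating follower responses."""
    best_price, best_profit = None, float("-inf")
    for cents in range(0, 1001):          # candidate prices 0.00 .. 10.00
        price = cents / 100
        demand = n_users * follower_demand(price)
        profit = (price - unit_cost) * demand
        if profit > best_profit:
            best_price, best_profit = price, profit
    return best_price, best_profit

price, profit = leader_best_price()
```

Because the leader anticipates the followers' best response, the equilibrium price (6.0 here, by solving the quadratic profit function) differs from what a simultaneous-move game would yield.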
Finally, Machine Learning techniques involve algorithms that aim to learn the fog environment, the users, and the resource allocation behaviors in order to predict new requests. In this context, there are Deep Learning techniques [188, 198], Deep Reinforcement Learning [68, 69, 98, 106], Bayesian learning [56, 168], and Fuzzy Logic [64, 155], which goes beyond the limits of Machine Learning into the broader Artificial Intelligence field [196]. Analysis of the results obtained in this survey shows an increase in the use of these techniques for fog computing resource allocation proposals in recent years, since 70% of them were proposed in the last three years. Although they may require greater computational power, given the need to process historical series and large volumes of data, they are more accurate in choosing the best resource to be allocated, even when considering different input parameters.
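As a minimal example of a learning-based allocator, the sketch below uses tabular Q-learning to decide, per request, between a fog node and the cloud. The load-dependent latency model and all hyperparameters are invented purely for illustration.

```python
import random

# State: the fog node's load level (0-3); actions: {fog, cloud};
# reward: the negative latency observed for the placed request.
random.seed(0)
ACTIONS = ("fog", "cloud")

def step(load, action):
    """Deterministic toy environment: fog is fast until it saturates."""
    if action == "fog":
        latency = 5 + 20 * load          # queuing delay grows with load
        load = min(load + 1, 3)
    else:
        latency = 40                     # fixed WAN latency to the cloud
        load = max(load - 1, 0)
    return -latency, load

q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
load = 0
for _ in range(20000):
    if random.random() < epsilon:        # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(load, a)])
    reward, nxt = step(load, action)
    target = reward + gamma * max(q[(nxt, a)] for a in ACTIONS)
    q[(load, action)] += alpha * (target - q[(load, action)])
    load = nxt

# Greedy policy learned per load level.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)}
```

The learned policy keeps requests on the fog node while it is lightly loaded and offloads to the cloud once it saturates, which is the qualitative behavior such learning-based proposals aim for.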

5.3 Covered Architecture Layers

Analysis of which layers of architecture are considered in resource allocation approaches in fog computing is necessary to answer research question RQ3. Most of the analyzed papers considered the fog architecture divided into three layers (IoT, Fog, and Cloud) in their proposals, as presented in Section 2. Therefore, the resource allocation in fog computing can be considered as a double correspondence problem [65]. Approaches to solving this problem may involve only the Fog Layer, or the communication between two (IoT x Fog or Fog x Cloud), or even three layers (IoT x Fog x Cloud) of the fog architecture. The publications are classified in these scenarios in Table 4.
Covered Layers | Papers | Description | %
Fog | [8, 54, 58, 66, 73, 98, 107, 112, 113, 114, 129, 148, 160, 161, 167, 179, 184, 186, 192, 193, 198, 203] | Uses only the resources available in the Fog Layer | 20.4%
Cloud - Fog | [17, 45, 48, 71, 72, 90, 92, 93, 102, 158, 174] | Uses the resources allocated in the two higher layers in the three-tier architecture | 10.2%
IoT - Fog | [38, 59, 61, 80, 82, 86, 106, 189, 201, 204] | Uses the resources allocated in the two lower layers in the three-tier architecture | 9.3%
3-Layers | [1, 3, 4, 5, 10, 12, 15, 16, 23, 25, 26, 51, 56, 63, 64, 68, 69, 78, 79, 83, 88, 101, 104, 109, 117, 122, 130, 133, 138, 141, 152, 155, 156, 168, 169, 170, 175, 177, 182, 183, 187] | Uses the resources available in all layers in the three-tier architecture | 49.1%
Table 4. Covered Architecture Layers
There are approaches that consider only the link formed between end-user devices, located in the IoT Layer, and devices in the Fog Layer. Once this link is established, they will connect to the Cloud Layer as a single resource group. Although these approaches do not disregard the existence and the relationship of the Fog Layer with the Cloud Layer, they seek alternatives to address all resource allocation problems in the Fog Layer, avoiding forwarding requests to the Cloud Layer.
Some analyzed publications focused on the connection formed between the Fog Layer devices and the Cloud Layer ones. The resources made available through this relationship allowed end users to run their workloads on them. The papers in this context consider that the services required by end-users, who are in the IoT Layer, have already been distributed in the Fog Layer and based on this premise, need to be provisioned using the resources available in that layer and in the Cloud Layer.
There are also some papers that apply their proposals considering only the Fog Layer. In this scenario, all requests must be solved by the resources available in the fog environment. However, this is not usual, since fog computing is complementary to the cloud and not a paradigm to replace it.
Finally, most works analyzed in this article addressed their proposals using all three layers of the fog computing architecture. This is the most common approach because it considers the Fog Layer as an intermediate layer for reaching the defined objectives (for example, improving QoS). The requests are generated in the IoT Layer and are met using not only the Fog Layer but also the Cloud Layer. This makes sense since, as indicated in the definitions of fog computing presented in Section 2, this paradigm is intended to complement, not substitute, cloud computing. In total, 53 of the 108 analyzed publications (49%) considered all three fog layers in their proposals.
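The three-layer flow described above can be sketched as a simple cascade: a request arriving from the IoT Layer is served by the first fog node with enough capacity, falling back to the cloud otherwise. The node names and capacities are hypothetical.

```python
def place(request_cpu, fog_nodes):
    """Serve an IoT-Layer request in the Fog Layer if possible,
    otherwise forward it to the Cloud Layer."""
    for name, free in fog_nodes.items():
        if free >= request_cpu:
            fog_nodes[name] = free - request_cpu
            return ("fog", name)
    return ("cloud", "cloud-dc")   # cloud assumed to always have capacity

fog = {"fog-1": 2, "fog-2": 4}
placements = [place(cpu, fog) for cpu in (2, 3, 3)]
```

In this run the first two requests stay in the Fog Layer and the third, which no fog node can host anymore, is forwarded to the cloud.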

5.4 Virtualization Models

RQ4 aims to identify the virtualization models used in resource allocation approaches in fog computing. It is important to emphasize that in fog computing the availability of resources (such as processing, memory, storage, and network) is essential. Unlike cloud computing, where resources are always available, the fog has strong resource constraints, as fog nodes are often devices with low computational capacity. A switch, for example, has the main function of managing network connections, but it can be used by the Fog Layer to offer its spare computing resources for processing, storage, and so on. Given the above, the virtualization models used in the analyzed studies can be grouped into two categories, as presented in Table 5 and discussed below.
Virtualization | Papers | Description | %
VM | [1, 2, 3, 4, 5, 10, 13, 15, 16, 26, 45, 56, 61, 66, 70, 78, 88, 90, 91, 102, 109, 111, 115, 117, 122, 133, 155, 156, 168, 172, 175, 187, 190, 193] | Exploits virtualization at the hardware level so multiple operating systems can run independently on a single physical resource. | 31.5%
Container | [8, 12, 69, 92, 96, 115, 138, 160, 161, 188, 193] | Isolates the processes with just the necessary application packages. | 10.2%
Table 5. Virtualization Models
The Virtual Machine (VM) concept is widely used, as it exploits virtualization at the hardware level so that multiple operating systems can run independently on a single physical resource. VM instances are executed on an abstraction layer called hypervisor, which allows the sharing of hardware between different instances [120]. The container is a type of virtualization which is lighter when compared to virtual machines and offers virtualization at the operational level [120]. Containers isolate processes with just the necessary application packages and are highly portable across multiple nodes of fog computing. In [193] the authors present some advantages in the use of containers when compared to virtual machines, as follows: containers start faster than VMs because hypervisors are not required, containers are better than VMs in terms of performance, and the greater the number of VMs deployed on a server, the higher the performance degradation of the server.
The use of an adequate virtualization model is fundamental for the performance of the application and the achievement of the objectives indicated in the QoS [19]. Undoubtedly, VMs can be the best option in some use cases, such as those requiring greater isolation of the application or service. However, according to the analyzed articles, containers better suit the resource allocation field in fog computing, since they are lighter and more dynamic than virtual machines, favoring mobility and adapting better to the resource constraints that characterize this computational model [125].
In two of the analyzed papers, [115] and [193], both virtualization models - virtual machine and container - were used to better apply the proposed resource allocation method. Finally, none of the analyzed papers uses unikernels, a virtualization model even more lightweight than containers [118]; this is a gap to be explored in new proposals for efficient resource allocation.

5.5 Fog Computing Proposals Evaluation

Simulation tools and models are used to evaluate proposals by approximating the behavior of real environments. A model is a representation of an actual or planned system [173]. Simulators are used to study the behavior of the system and understand the factors that affect its performance as it evolves over time [124]. Simulation frameworks provide solutions in cases where mathematical modeling techniques are difficult or impossible to apply due to the scale, complexity, and heterogeneity of a fog computing system [173]. Simulation is a way to imitate the operation of real systems, with the freedom to modify the inputs and model a series of characteristics, analyze existing systems, or support the design of new ones. It also helps identify and balance costs [24].
Some simulators that were already used to validate studies in cloud computing have been adapted for fog computing. In addition, new simulators have been specifically designed to meet the demand of fog computing. A detailed analysis of several simulators for fog computing was presented in [124]. This section aims to answer RQ5, which addresses the way that the proposed approaches were evaluated. An analysis of the simulators that were used in the selected publications is presented in Table 6.
Environment | Papers | Description | %
CloudSim | [1, 2, 3, 4, 5, 10, 13, 27, 102, 109, 122, 156, 204] | Proposed to simulate cloud computing services [36] | 12.0%
iFogSim | [45, 56, 66, 69, 88, 101, 130, 141, 167, 168, 172, 175] | An extension of CloudSim especially for Fog Computing [74] | 11.1%
GridSim | [131] | Allows the modeling and simulation of application models for grid computing [35] | 0.9%
Numerical (Matlab) | [15, 16, 23, 25, 26, 38, 51, 54, 58, 59, 61, 64, 71, 73, 78, 80, 82, 83, 86, 90, 91, 99, 104, 106, 107, 111, 112, 113, 117, 133, 140, 155, 177, 179, 182, 183, 184, 185, 186, 187, 189, 190, 192, 195, 201] | Used to study the behavior of systems whose mathematical models are too complex to provide analytical solutions | 41.7%
Stand-alone | [8, 17, 32, 40, 48, 63, 70, 72, 92, 93, 110, 114, 115, 129, 138, 142, 148, 152, 160, 161, 169, 170, 174, 188, 193, 198, 199, 202, 203] | Hardware environment and synthetic datasets designed just for the execution of tests. | 26.9%
Table 6. Simulation Frameworks
The majority of the analyzed papers used numerical simulations to validate their proposals. This type of simulation is used to study the behavior of systems whose mathematical models are too complex to provide analytical solutions, as in many non-linear systems - the situation found in many proposals addressing resource allocation in fog computing. The most common tool was Matlab [163], used by 45 of the 108 publications, that is, about 42%.
CloudSim [36] was proposed to simulate cloud computing services. It is a library for cloud computing simulation developed in Java, where each entity is represented as a class. Most of the works that used CloudSim presented a solution integrated with cloud computing environments, justifying the use of this simulator. An extension of CloudSim is iFogSim [74], which allows one to model IoT and fog environments and measure the impact of the proposed resource management techniques in terms of latency, network congestion, energy consumption, and cost. Since it was only presented in 2017 [74] and required time to mature and gain adoption, this simulator has come to be used by academics more recently.
In a less representative way, some analyzed studies used other simulators to evaluate their proposals. The GridSim simulator [35], which allows the modeling and simulation of application models for grid computing, was used in [131]. Finally, about 27% of the publications were evaluated in test environments built specifically to validate the paper’s proposal. In this type of test, all the software and the hardware are configured in a stand-alone way, using synthetic data sets.
The predominance of the use of simulators and mathematical models can be seen as a weakness in the evaluation of the proposals for the resource allocation process, since these mathematical models can hide unexpected behaviors of fog computing, mainly when considering the heterogeneity and mobility characteristics of these environments.

5.6 Fog Computing Domains

Of the 108 analyzed papers, just 20 (18%) indicated a specific domain to which their proposals are applied. Thus, to address RQ6, these domains are detailed in Table 7.
Domain | Papers | Description | %
Health | [64, 70] | Medical Cyber-Physical Systems (MCPS) allow continuous and intelligent interaction between computational elements and medical devices, requiring low-latency approaches | 1.9%
Smart Buildings | [15, 66, 90, 158, 160, 161, 185, 193] | As fog computing is closer to IoT devices, it is widely used in smart city, building, and industry projects [194], processing data and providing fast responses | 7.4%
Vehicular | [26, 39, 106, 148, 177, 188, 201, 203] | Vehicles must be able to compute, store, and communicate with other vehicles or devices to improve safety, convenience, and driving satisfaction [103] | 7.4%
Virtual Reality | [12, 79] | An advanced interface technology between user and computer, creating an environment closer to the person's reality with visual, sound, and tactile effects | 1.9%
Table 7. Fog Computing Domains
In recent years, there has been growing attention to systems that support the development of vehicular networks. This is because vehicles have been increasingly equipped with powerful on-board computers, large-capacity data storage units, and more advanced communication modules to improve safety, convenience, and driving satisfaction [103]. These vehicles must be able to compute, store, and communicate with other vehicles or devices. The features and benefits of fog computing are fully adherent to this type of service, as vehicles have high mobility and some services, such as autonomous driving, require a very low response time to be effective and safe. For this reason, resource allocation proposals for this domain should prioritize execution time.
A trend in the health area is the use of Medical Cyber-Physical Systems (MCPS), which allow a continuous and intelligent interaction between computational elements and medical devices (e.g., heart-rate monitors) [70]. However, considering the complexity and high quality of the required services, these devices need low latency, among other criteria, when communicating with the cloud computing platform. Fog computing is therefore a promising approach for these devices, and the resource allocation proposals that target this domain have focused on the latency reduction metric.
As fog computing is closer to IoT devices, it is widely used in smart city, building, and industry projects [194]. A smart building is one that is responsive to the requirements of occupants, organizations, and society. It also needs to be sustainable (energy and water consumption), healthy (well-being of the people living and working within it), and functional (user needs) [42]. A smart building use case was utilized in [26], [66], [161], and [90] to frame the resource allocation proposal. Like smart buildings, smart manufacturing is a use case that can take advantage of fog computing benefits, relying on the high geographic distribution and heterogeneity of fog devices to seek an optimized resource allocation for the execution of applications and services.
Finally, Virtual Reality was used in two analyzed papers to demonstrate resource allocation. As the need for such low-latency applications is increasing, new use cases for fog computing are expected to appear in the coming years. The papers analyzed in this survey that used this domain focused their resource allocation proposals on the latency reduction and resource utilization metrics.

6 Related Work

As fog computing develops, the number of publications on this subject grows. In addition to surveys with a large scope in resource management, such as those presented in Table 8, it is already possible to find works on specific steps of the resource management process, such as discovery [97], monitoring [43], and orchestration [44].
Paper | Year | Analyzed Articles | Period | Number of RQ | Methodology | Metrics | Techniques | Evaluation | Virtualization | Architecture | Domains | Challenges
[165] | 2019 | 5 | 2017 - 2019 | - | - | ✓ | ✓ | - | - | - | - | -
[146] | 2020 | 17 | 2015 - 2018 | - | - | ✓ | ✓ | - | - | - | - | -
[127] | 2020 | 28 | 2016 - 2019 | - | - | ✓ | ✓ | ✓ | - | - | ✓ | -
[9] | 2021 | 25 | 2017 - 2020 | 3 | ✓ | ✓ | ✓ | - | - | ✓ | - | -
[52] | 2021 | 20 | - | - | - | - | - | - | - | ✓ | ✓ | -
[151] | 2021 | 10 | 2016 - 2019 | - | - | ✓ | ✓ | ✓ | - | - | - | -
[87] | 2022 | 49 | - | - | - | ✓ | ✓ | ✓ | - | - | ✓ | ✓
[57] | 2022 | 34 | - | 6 | - | ✓ | ✓ | ✓ | - | - | - | -
This Work | 2023 | 108 | 2012 - 2022 | 6 | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓
Table 8. Related Works
Following this trend, considering our resource management proposal is composed of only five steps, and aiming to contribute to more in-depth research on a specific step, this article will focus on the analysis of publications on the Resource Allocation step. This step is the one that appeared most frequently in the surveys examined in Table 1, and summarized in Figure 2.
The works analyzed in this section are summarized in Table 8. They were compared considering the number and the period of the analyzed papers, the number of research questions, and the description of the methodology applied in the survey. It was also analyzed if they brought information about the six RQs defined in this current work, that is: metrics (RQ1), techniques (RQ2), architecture layers used (RQ3), virtualization applied (RQ4), evaluation methods (RQ5), and domains (RQ6). Finally, there was an evaluation of whether the work considered the challenges inherent to the Resource Allocation step.
Sindhu and Prakash [165] presented a survey of both resource allocation and task scheduling in fog computing based on IoT applications. However, only five methods (or articles) were analyzed, so little information was added about the resource allocation step. In Patil-Karpe et al. [146], a review of resource allocation in fog computing analyzed 17 articles, presenting the objectives and scope of each method. No information was given about the review methodology used or the challenges of the resource allocation problem.
A survey about scheduling is presented by Matrouk and Alatoun [127], classifying the scheduling problems into five categories: task scheduling, resource scheduling, resource allocation, job scheduling, and workflow scheduling. A comprehensive analysis of 28 selected papers was presented, including information about the algorithm and the metric used in each one. However, the survey covers the scheduling approaches only partially and does not include recent studies.
Ahmed and Zeebaree [9] provided a systematic analysis of fog computing focusing on system models and resource allocation. The article is supported by three research questions that aimed to answer the relevancy, the metrics, and the goals of the resource allocation step, by analyzing 25 selected papers. However, the survey did not review the virtualization methods, nor the challenges related to resource allocation in fog computing.
In the paper [52], Dumitru et al. presented a review on resource allocation in both fog and edge computing focusing on the design and deployment of enterprise systems, such as medicine, automotive, and smart homes. However, no description of the research method was provided, nor details about the number of articles analyzed. Rahul and Aron [151] presented a review of 10 publications about the fog computing resource allocation problem, highlighting the performance metrics, goals, and the simulation environment of each one. Details about the methodology used to perform the review were not presented.
Jamil et al. [87] provided a survey considering the Scheduling and Resource Allocation steps in fog computing with a focus on learning-based dynamic algorithms. However, no description of the research method was provided, and information about the number and the period of the analyzed papers was missing. Finally, the work presented by Fahimullah et al. [57] brings a literature review about the use of machine learning techniques in fog computing considering six resource management steps: provisioning, placement, scheduling, allocation, offloading, and load balancing. Some metrics and simulation tools were also presented. The survey reviews the resource management approaches only partially, since only machine learning techniques are considered.
Analyzing the papers presented in this section makes it apparent that all the existing surveys specifically about resource allocation (and related terms) in fog computing environments are less comprehensive than this article. This can be explained by the fact that the period covered by this paper (10 years) is longer than in the others. Also, the number of analyzed articles is more than double that of any other paper. Finally, it is the only one that covers all topics related to the six research questions.

7 Challenges

This work analyzed 108 papers related to resource allocation in fog computing. The analysis made it possible to answer the six research questions presented, bringing to light valuable information about this research area. However, some challenges in the context of this work still need more attention from the research community. The main identified challenges and possible future directions are detailed in this section.
Dynamicity: In a real fog infrastructure, users tend to change their computing resource requirements over time [164]. However, most resource allocation proposals consider the user's requests to be static, meaning that an on-the-fly change in the requirements would not be possible. Updating requirements after request fulfillment is a research gap revealed by RQ1. A possible strategy is to verify whether the new requirements can be fulfilled by the already allocated resources. If they cannot, the resource allocation process can be executed again, verifying whether it is better to offload/migrate the service (e.g., to a more powerful node or to the cloud) or to redeploy it in a new place, depending on the predicted deployment time. Another possibility is the use of load balancing to permit parallel processing and a better user experience even when no powerful node is available [55]. Predictive solutions, using machine learning techniques combined with artificial intelligence methods, seem to be a viable direction;
Mobility: Most of the analyzed papers assume that the fog nodes are fixed, but this is not always true. Mobility must be considered not only for IoT devices but also for fog nodes, as it complicates the discovery, allocation, monitoring, and orchestration steps;
Privacy and security: In the context of resource allocation for fog computing, these two interrelated issues are challenging and need more research [18, 37, 164]. When accepting a user’s request, the management service in place must be in a position to validate whether the request is genuine and comes from a trusted user, before passing it to resource management [37]. However, there is a lack of efficient techniques and tools for assuring privacy and preventing attacks in resource-restricted devices [18];
Absence of unikernels: In response to RQ4, it was noted that the main virtualization model used remains the virtual machine, mainly due to its wide adoption and the ease with which it integrates with cloud computing services. More recent articles, however, have given preference to containers. No publication used unikernels, a virtualization method that is also well suited to fog computing environments [19];
High heterogeneity of devices: Unlike cloud computing, which relies on more homogeneous and controlled equipment, fog computing can involve equipment with different architectures (e.g., Raspberry Pi, BeagleBoard, several kinds of sensors). There is still a gap in the literature regarding resource allocation proposals that are independent of the type of resource to be allocated, which would expand the scope of the environment while meeting an essential characteristic of this paradigm;
High diversity in data: Given the heterogeneity of the equipment, the data it generates also come in different, non-standardized formats, making manipulation and use in applications difficult. The analyzed proposals do not indicate a way of dealing with these diverse data, which remains a great challenge to be overcome;
Limited computational power: Regarding RQ3, which refers to the covered architecture layers, few of the analyzed papers consider that the fog nodes executing their proposed methods have low computational power. In this sense, many of the proposals can, in fact, only be executed on powerful servers in the cloud layer, which generates a higher overhead than executing them in the fog layer;
Limited connection: Similarly, most proposals assume the existence of a high-speed connection, which may not be a reality for some equipment in the fog network and, especially, for IoT devices located in remote places, as in agricultural use cases;
Consider both QoS and QoE parameters: In response to RQ1, which considers the resource allocation metrics, a fog node is composed of objective attributes, such as CPU, storage, memory, and other computational capacities; these attributes are considered QoS parameters. Fog nodes also have subjective attributes, such as availability and reliability [162], which are measured by QoE parameters. To achieve proper fog computing management, considering both objective and subjective attributes is desirable, but this is still a gap in the analyzed papers;
Provider’s and user’s perspective: To perform fog computing management, two perspectives must be considered: (i) the end-user perspective; and (ii) the service provider perspective. On the one hand, end-users always want to obtain the best resources available, those with greater computational power and higher values for subjective attributes. On the other hand, service providers are interested in delivering the minimal set of resources, to avoid unnecessary costs or resource unavailability for other users. Some publications consider only the user’s perspective [77, 182], while others consider only the provider’s perspective [54, 130]. Considering both perspectives, i.e., balancing the needs of end-users and providers, is still an open matter for investigation [194];
Multiple metrics: Although some analyzed articles considered two or even three metrics in their proposals, in a real-world environment the number of objectives to be met in resource allocation can be greater when considering, for example, integration with the database, the connection with cloud providers, network and security protocols, relationships with legacy applications, and so on;
Absence of use in real scenarios: None of the analyzed works implemented the proposed resource allocation technique in a real-world use case, as described in Section 5.5 in answer to RQ5 and RQ6, even though a few experimental testbeds already exist [157]. Simulators, which only approximate reality, are not always sufficient to understand and predict the challenges and complexities of a real implementation [81]. In any case, the absence of fog computing services offered by public providers can still be considered a weakness to be overcome in order to favor the implementation of new resource allocation proposals.

8 Conclusion

This article provided, through a systematic literature review, an overview of the state of the art in computational resource allocation in fog computing, in addition to presenting the characteristics of this computational paradigm. To achieve this goal, it was first necessary to define the scope of the term resource allocation in fog computing, since there is no consensus on it among academics. An analysis of existing surveys on both resource management and resource allocation was performed. The mapping process continued with the definition of six research questions that guided the search, the selection criteria, and the evaluation of the publications found.
Evaluating the 108 selected publications made it possible to answer all the research questions. Firstly, regarding the resource allocation metrics, cost was identified as the most used, followed by latency and execution time. The joint use of two or more metrics (e.g., reducing latency while optimizing resource utilization) was also observed, and can be seen as a good guide for achieving efficient resource allocation in new proposals.
Heuristic approaches are the techniques most used to address the resource allocation problem. However, the research also revealed a recent growth in the use of machine learning techniques, which are effective in the resource allocation process because they rely on historical data and achieve more accurate predictions.
Most works analyzed in this article addressed their proposals using all three layers of the fog computing architecture, which is fitting given fog’s characteristic of being complementary to the cloud. Regarding the virtualization model, virtual machines are the most used in the analyzed proposals, although the use of containers has grown in recent years. This can be seen as a good research opportunity for new resource allocation proposals.
The majority of the analyzed papers used numerical simulators to validate their proposals, and the absence of proposals validated in a real-world use case is a gap to be overcome. Finally, although few of the analyzed articles adopt concrete use cases, smart buildings and vehicular domains are the most common.
Thus, it was concluded that there are still many questions to be investigated in the academic sphere before fog computing implementations become a widespread reality and everyone can take advantage of their benefits.
By presenting a systematic literature review specifically on resource allocation for fog computing, the present work contributes significantly to academia, providing support for researchers to direct their future work toward the gaps yet to be explored.


References

[1]
Mohammad Aazam, Khaled A. Harras, and Sherali Zeadally. 2019. Fog computing for 5G tactile industrial internet of things: QoE-aware resource allocation model. IEEE Transactions on Industrial Informatics 15, 5 (2019), 3085–3092.
[2]
Mohammad Aazam and Eui-Nam Huh. 2015. Dynamic resource provisioning through fog micro datacenter. In 2015 IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops). IEEE, 105–110.
[3]
Mohammad Aazam and Eui Nam Huh. 2015. Fog computing micro datacenter based dynamic resource estimation and pricing model for IoT. Proceedings - International Conference on Advanced Information Networking and Applications, AINA 2015-April, March (2015), 687–694.
[4]
Mohammad Aazam, Marc St.-Hilaire, Chung-Horng Lung, and Ioannis Lambadaris. 2016. MeFoRE: QoE based resource estimation at Fog to enhance QoS in IoT. In 2016 23rd International Conference on Telecommunications (ICT). IEEE, 1–5.
[5]
Mohammad Aazam, Marc St.-Hilaire, Chung-Horng Lung, and Ioannis Lambadaris. 2016. PRE-Fog: IoT trace based probabilistic resource estimation at Fog. In 2016 13th IEEE Annual Consumer Communications & Networking Conference (CCNC). IEEE, 12–17.
[6]
Saeed Abapour, Morteza Nazari-Heris, Behnam Mohammadi-Ivatloo, and Mehrdad Tarafdar Hagh. 2020. Game theory approaches for the solution of power system problems: A comprehensive review. Archives of Computational Methods in Engineering 27, 1 (2020), 81–103.
[7]
Mohamed Abderrahim, Meryem Ouzzif, Karine Guillouard, Jerome Francois, and Adrien Lebre. 2017. A holistic monitoring service for fog/edge infrastructures: A foresight study. In 2017 IEEE 5th International Conference on Future Internet of Things and Cloud (FiCloud). IEEE, 337–344.
[8]
Amine Abouaomar, Soumaya Cherkaoui, Abdellatif Kobbane, and Oussama Abderrahmane Dambri. 2019. A resources representation for resource allocation in fog computing networks. In 2019 IEEE Global Communications Conference (GLOBECOM). IEEE, 1–6.
[9]
Kosrat Dlshad Ahmed, Subhi R. M. Zeebaree, et al. 2021. Resource allocation in fog computing: A review. International Journal of Science and Business 5, 2 (2021), 54–63.
[10]
Samson Busuyi Akintoye and Antoine Bagula. 2019. Improving quality-of-service in cloud/fog computing through efficient resource allocation. Sensors 19, 6 (2019), 1267.
[11]
F. Al-Doghman, Z. Chaczko, A. R. Ajayan, and R. Klempous. 2016. A review on fog computing technology. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC). 001525–001530. DOI:
[12]
Derian Alencar, Cristiano Both, Rodolfo Antunes, Helder Oliveira, Eduardo Cerqueira, and Denis Rosário. 2021. Dynamic microservice allocation for virtual reality distribution with QoE support. IEEE Transactions on Network and Service Management (2021).
[13]
Aymen Abdullah Alsaffar, Hung Phuoc Pham, Choong-Seon Hong, Eui-Nam Huh, and Mohammad Aazam. 2016. An architecture of IoT service delegation and resource allocation based on collaboration between fog and cloud computing. Mobile Information Systems 2016 (2016).
[14]
Hemant Kumar Apat, Prasenjit Maiti, Punyaban Patel, et al. 2020. Review on QoS aware resource management in fog computing environment. In 2020 IEEE International Symposium on Sustainable Energy, Signal Processing and Cyber Security (iSSSC). IEEE, 1–6.
[15]
Hamid Reza Arkian, Abolfazl Diyanat, and Atefe Pourkhalili. 2017. MIST: Fog-based data analytics scheme with cost-efficient resource provisioning for IoT crowdsensing applications. Journal of Network and Computer Applications 82, August 2016 (2017), 152–165.
[16]
Hafsa Arshad, Munam Ali Shah, Hasan Ali Khattak, Zoobia Ameer, Assad Abbas, and Samee Ullah Khan. 2018. Evaluating bio-inspired optimization techniques for utility price estimation in fog computing. In 2018 IEEE International Conference on Smart Cloud (SmartCloud). IEEE, 84–89.
[17]
A. Asensio, X. Masip-Bruin, R. J. Durán, I. de Miguel, G. Ren, S. Daijavad, and A. Jukan. 2020. Designing an efficient clustering strategy for combined fog-to-cloud scenarios. Future Generation Computer Systems (2020).
[18]
Cosmin Avasalcai, Ilir Murturi, and Schahram Dustdar. 2020. Edge and fog: A survey, use cases, and future challenges. Fog Computing: Theory and Practice (2020), 43–65.
[19]
Joao Bachiega, Breno Gustavo Soares da Costa, and Aleteia P. F. Araujo. 2021. Computational perspective of the fog node. 22nd International Conference on Internet Computing & IoT (2021).
[20]
João Bachiega Jr., Breno Costa, Leonardo Carvalho, Victor Hugo Oliveira, William Santos, Maria Clícia S. de Castro, and Aleteia Araujo. 2022. From the sky to the ground: Comparing fog computing with related distributed paradigms. In Proceedings of the 12th International Conference on Cloud Computing and Services Science (CLOSER 2022). 158–169. DOI:
[21]
Brenda S. Baker. 1985. A new proof for the first-fit decreasing bin-packing algorithm. Journal of Algorithms 6, 1 (1985), 49–70.
[22]
Venkatraman Balasubramanian and Ahmed Karmouch. 2017. An infrastructure as a service for mobile ad-hoc cloud. In 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC). IEEE, 1–7.
[23]
Hind Bangui, Said Rakrak, Said Raghay, and Barbora Buhnova. 2018. Moving towards smart cities: A selection of middleware for fog-to-cloud services. Applied Sciences 8, 11 (2018), 2220.
[24]
Jerry Banks, John S. Carson II, Barry Nelson, et al. 2005. Discrete-Event System Simulation - Fourth edition. (2005).
[25]
Gaurav Baranwal, Ravi Yadav, and Deo Prakash Vidyarthi. 2020. QoE aware IoT application placement in fog computing using modified-topsis. Mobile Networks and Applications 25, 5 (2020), 1816–1832.
[26]
Hayat Bashir, Seonah Lee, and Kyong Hoon Kim. 2019. Resource allocation through logistic regression and multicriteria decision making method in IoT fog computing. Transactions on Emerging Telecommunications Technologies (2019), e3824.
[27]
Sudheer Kumar Battula, Saurabh Garg, Ranesh Kumar Naha, Parimala Thulasiraman, and Ruppa Thulasiram. 2019. A micro-level compensation-based cost model for resource allocation in a fog environment. Sensors 19, 13 (2019), 2954.
[28]
Julian Bellendorf and Zoltán Ádám Mann. 2020. Classification of optimization problems in fog computing. Future Generation Computer Systems 107 (2020), 158–176.
[29]
Malika Bendechache, Sergej Svorobej, Patricia Takako Endo, and Theo Lynn. 2020. Simulating resource management across the cloud-to-thing continuum: A survey and future directions. Future Internet 12, 6 (2020), 95.
[30]
Dimitri P. Bertsekas, W. W. Hager, and O. L. Mangasarian. 1998. Nonlinear Programming. Athena Scientific Belmont, MA.
[31]
Lokesh B. Bhajantri et al. 2020. A comprehensive survey on resource management in internet of things. Journal of Telecommunications and Information Technology (2020).
[32]
Fan Bi, Sebastian Stein, Enrico Gerding, Nick Jennings, and Thomas La Porta. 2019. A truthful online mechanism for resource allocation in fog computing. In PRICAI 2019: Trends in Artificial Intelligence, Abhaya C. Nayak and Alok Sharma (Eds.). Springer International Publishing, Cham, 363–376.
[33]
Flavio Bonomi, Rodolfo Milito, Jiang Zhu, and Sateesh Addepalli. 2012. Fog computing and its role in the internet of things. In Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing (MCC’12). ACM, New York, NY, USA, 13–16. DOI:
[34]
Gilles Brassard and Paul Bratley. 1996. Fundamentals of Algorithmics. Vol. 524. Prentice Hall Englewood Cliffs.
[35]
Rajkumar Buyya and Manzur Murshed. 2002. GridSim: A toolkit for the modeling and simulation of distributed resource management and scheduling for grid computing. Concurrency and Computation: Practice and Experience 14, 13-15 (2002), 1175–1220.
[36]
Rajkumar Buyya, Rajiv Ranjan, and Rodrigo N. Calheiros. 2009. Modeling and simulation of scalable cloud computing environments and the CloudSim toolkit: Challenges and opportunities. In 2009 International Conference on High Performance Computing & Simulation. IEEE, 1–11.
[37]
G. Sai Sesha Chalapathi, Vinay Chamola, Aabhaas Vaish, and Rajkumar Buyya. 2021. Industrial internet of things (IIoT) applications of edge and fog computing: A review and future directions. Fog/Edge Computing for Security, Privacy, and Applications (2021), 293–325.
[38]
Zheng Chang, Liqing Liu, Xijuan Guo, and Quan Sheng. 2020. Dynamic resource allocation and computation offloading for IoT fog computing system. IEEE Transactions on Industrial Informatics 17, 5 (2020), 3348–3357.
[39]
X. Chen, S. Leng, K. Zhang, and K. Xiong. 2019. A machine-learning based time constrained resource allocation scheme for vehicular fog computing. China Communications 16, 11 (2019), 29–41.
[40]
Xincheng Chen, Yuchen Zhou, Bintao He, and Lu Lv. 2019. Energy-efficiency fog computing resource allocation in cyber physical internet of things systems. IET Communications (2019).
[41]
Mung Chiang, Sangtae Ha, Fulvio Risso, Tao Zhang, and I. Chih-Lin. 2017. Clarifying fog computing and networking: 10 questions and answers. IEEE Communications Magazine 55, 4 (2017), 18–20.
[42]
Derek Clements-Croome. 2011. Sustainable intelligent buildings for people: A review. Intelligent Buildings International 3, 2 (2011), 67–86.
[43]
Breno Costa, Joao Bachiega Jr., Leonardo Rebouças Carvalho, Michel Rosa, and Aleteia Araujo. 2022. Monitoring fog computing: A review, taxonomy and open challenges. Computer Networks (2022), 109189.
[44]
Breno Costa, Joao Bachiega Jr., Leonardo Rebouças de Carvalho, and Aleteia P. F. Araujo. 2022. Orchestration in fog computing: A comprehensive survey. ACM Computing Surveys (CSUR) 55, 2 (2022), 1–34.
[45]
Rodrigo A. C. da Silva and Nelson L. S. da Fonseca. 2018. Resource allocation mechanism for a fog-cloud infrastructure. In 2018 IEEE International Conference on Communications (ICC). IEEE, 1–6.
[46]
Amir Vahid Dastjerdi, H. Gupta, R. N. Calheiros, S. K. Ghosh, and Rajkumar Buyya. 2016. Fog computing: Principles, architectures, and applications. Internet of Things: Principles and Paradigms (2016), 61–75.
[47]
Soumya Kanti Datta, Rui Pedro Ferreira Da Costa, and Christian Bonnet. 2015. Resource discovery in internet of things: Current trends and future standardization aspects. In 2015 IEEE 2nd World Forum on Internet of Things (WF-IOT). IEEE, 542–547.
[48]
Jean Lucas de Souza Toniolli and Brigitte Jaumard. 2019. Resource allocation for multiple workflows in cloud-fog computing systems. In Proceedings of the 12th IEEE/ACM International Conference on Utility and Cloud Computing Companion. 77–84.
[49]
Hoang T. Dinh, Chonho Lee, Dusit Niyato, and Ping Wang. 2013. A survey of mobile cloud computing: Architecture, applications, and approaches. Wireless Communications and Mobile Computing 13, 18 (2013), 1587–1611.
[50]
Koustabh Dolui and Soumya Kanti Datta. 2017. Comparison of edge computing implementations: Fog computing, cloudlet and mobile edge computing. GIoTS 2017 - Global Internet of Things Summit, Proceedings (2017).
[51]
Ruizhong Du, Kunqi Xu, and Xiaoyan Liang. 2020. Multiattribute evaluation model based on the KSP algorithm for edge computing. IEEE Access 8 (2020), 146932–146943.
[52]
Marian-Cosmin Dumitru, Mihnea Alexandru Moisescu, and Radu Pietraru. 2021. A review of dynamic resource allocation in edge and fog oriented enterprise systems. In 2021 23rd International Conference on Control Systems and Computer Science (CSCS). IEEE, 295–301.
[53]
Agoston E. Eiben, James E. Smith, et al. 2003. Introduction to Evolutionary Computing. Vol. 53. Springer.
[54]
Subha P. Eswaran, Sridhar Sripurushottama, and Manoj Jain. 2018. Multi criteria decision making (MCDM) based spectrum moderator for fog-assisted internet of things. Procedia Computer Science 134 (2018), 399–406.
[55]
Mohammad Etemad, Mohammad Aazam, and Marc St-Hilaire. 2017. Using devs for modeling and simulating a fog computing environment. In 2017 International Conference on Computing, Networking and Communications (ICNC). IEEE, 849–854.
[56]
Masoumeh Etemadi, Mostafa Ghobaei-Arani, and Ali Shahidinejad. 2020. Resource provisioning for IoT services in the fog computing environment: An autonomic approach. Computer Communications 161 (2020), 109–131.
[57]
Muhammad Fahimullah, Shohreh Ahvar, and Maria Trocan. 2022. A review of resource management in fog computing: Machine learning perspective. arXiv preprint arXiv:2209.03066 (2022).
[58]
Qiang Fan, Jianan Bai, Hongxia Zhang, Yang Yi, and Lingjia Liu. 2020. Delay-aware resource allocation in fog-assisted IoT networks through reinforcement learning. arXiv preprint arXiv:2005.04097 (2020).
[59]
Qiang Fan, Jianan Bai, Hongxia Zhang, Yang Yi, and Lingjia Liu. 2021. Delay-aware resource allocation in fog-assisted IoT networks through reinforcement learning. IEEE Internet of Things Journal 9, 7 (2021), 5189–5199.
[60]
Yaoling Fan, Qiliang Zhu, and Yang Liu. 2018. Cloud/fog computing system architecture and key technologies for south-north water transfer project safety. Wireless Communications and Mobile Computing 2018 (2018).
[61]
Muhammad Junaid Farooq and Quanyan Zhu. 2020. QoE based revenue maximizing dynamic resource allocation and pricing for fog-enabled mission-critical IoT applications. IEEE Transactions on Mobile Computing (2020).
[62]
Stefano Forti, Marco Gaglianese, and Antonio Brogi. 2021. Lightweight self-organising distributed monitoring of Fog infrastructures. Future Generation Computer Systems 114 (2021), 605–618.
[63]
Keke Gai, Xiao Qin, and Liehuang Zhu. 2020. An energy-aware high performance task allocation strategy in heterogeneous fog computing environments. IEEE Trans. Comput. 70, 4 (2020), 626–639.
[64]
Kanika Garg, Naveen Chauhan, and Rajeev Agrawal. 2022. Optimized resource allocation for fog network using neuro-fuzzy offloading approach. Arabian Journal for Science and Engineering (2022), 1–14.
[65]
Mostafa Ghobaei-Arani, Alireza Souri, and Ali A. Rahmanian. 2019. Resource management approaches in fog computing: A comprehensive review. Journal of Grid Computing (2019), 1–42.
[66]
Sukhpal Singh Gill, Peter Garraghan, and Rajkumar Buyya. 2019. ROUTER: Fog enabled cloud based intelligent resource management approach for smart home IoT devices. Journal of Systems and Software (2019).
[67]
Sukhpal Singh Gill, Shreshth Tuli, Minxian Xu, Inderpreet Singh, Karan Vijay Singh, Dominic Lindsay, Shikhar Tuli, Daria Smirnova, Manmeet Singh, Udit Jain, Haris Pervaiz, Bhanu Sehgal, Sukhwinder Singh Kaila, Sanjay Misra, Mohammad Sadegh Aslanpour, Harshit Mehta, Vlado Stankovski, and Peter Garraghan. 2019. Transformative effects of IoT, blockchain and artificial intelligence on cloud computing: Evolution, vision, trends and open challenges. Internet of Things 8, October (2019), 100118.
[68]
Mohammad Goudarzi, Marimuthu S. Palaniswami, and Rajkumar Buyya. 2021. A distributed deep reinforcement learning technique for application placement in edge and fog computing environments. IEEE Transactions on Mobile Computing (2021).
[69]
A. S. Gowri et al. 2020. Fog resource allocation through machine learning algorithm. In Architecture and Security Issues in Fog Computing Applications. IGI Global, 1–41.
[70]
Lin Gu, Deze Zeng, Song Guo, Ahmed Barnawi, and Yong Xiang. 2015. Cost efficient resource management in fog computing supported medical cyber-physical system. IEEE Transactions on Emerging Topics in Computing 5, 1 (2015), 108–119.
[71]
Yunan Gu, Zheng Chang, Miao Pan, Lingyang Song, and Zhu Han. 2018. Joint radio and computational resource allocation in IoT fog computing. IEEE Transactions on Vehicular Technology 67, 8 (2018), 7475–7484.
[72]
Carlos Guerrero, Isaac Lera, and Carlos Juiz. 2019. Evaluation and efficiency comparison of evolutionary algorithms for service placement optimization in fog architectures. Future Generation Computer Systems 97 (2019), 131–144.
[73]
Nalan Gülpınar, Ethem Çanakoğlu, and Juergen Branke. 2018. Heuristics for the stochastic dynamic task-resource allocation problem with retry opportunities. European Journal of Operational Research 266, 1 (2018), 291–303.
[74]
Harshit Gupta, Amir Vahid Dastjerdi, Soumya K. Ghosh, and Rajkumar Buyya. 2017. iFogSim: A toolkit for modeling and simulation of resource management techniques in the internet of things, edge and fog computing environments. Software: Practice and Experience 47, 9 (2017), 1275–1296.
[75]
Pooyan Habibi, Mohammad Farhoudi, Sepehr Kazemian, Siavash Khorsandi, and Alberto Leon-Garcia. 2020. Fog computing : A comprehensive architectural survey. IEEE Access PP (2020), 1.
[76]
Mostafa Haghi Kashani, Amir Masoud Rahmani, and Nima Jafari Navimipour. 2020. Quality of service-aware approaches in fog computing. International Journal of Communication Systems 33, 8 (2020), e4340.
[77]
Vijay L. Hallappanavar and Mahantesh N. Birje. 2021. Prediction of quality of service of fog nodes for service recommendation in fog computing based on trustworthiness of users. Journal of Reliable Intelligent Environments (2021), 1–18.
[78]
Sonti Harika and B. Chaitanya Krishna. 2022. Multi-objective optimization-oriented resource allocation in the fog environment: A new hybrid approach. International Journal of Information Technology and Web Engineering (IJITWE) 17, 1 (2022), 1–25.
[79]
Syed Rizwan Hassan, Ishtiaq Ahmad, Ateeq Ur Rehman, Seada Hussen, and Habib Hamam. 2022. Design of resource-aware load allocation for heterogeneous fog computing environments. Wireless Communications and Mobile Computing 2022 (2022).
[80]
Abhishek Hazra, Mainak Adhikari, Tarachand Amgoth, and Satish Narayana Srirama. 2020. Stackelberg game for service deployment of IoT-enabled applications in 6G-aware fog networks. IEEE Internet of Things Journal 8, 7 (2020), 5185–5193.
[81]
Cheol-Ho Hong and Blesson Varghese. 2019. Resource management in fog/edge computing: A survey on architectures, infrastructure, and algorithms. ACM Computing Surveys (CSUR) 52, 5 (2019), 97.
[82]
Farhoud Hosseinpour, Ahmad Naebi, Seppo Virtanen, Tapio Pahikkala, Hannu Tenhunen, and Juha Plosila. 2021. A resource management model for distributed multi-task applications in fog computing networks. IEEE Access (2021).
[83]
Hualong Huang, Kai Peng, and Xiaolong Xu. 2020. Collaborative computation offloading for smart cities in mobile edge computing. In 2020 IEEE 13th International Conference on Cloud Computing (CLOUD). IEEE, 176–183.
[84]
Michaela Iorga, Larry Feldman, Robert Barton, Michael Martin, Nedim Goren, and Charif Mahmoudi. 2018. The NIST Definition of Fog Computing. Technical Report. National Institute of Standards and Technology.
[85]
Mir Salim Ul Islam, Ashok Kumar, and Yu-Chen Hu. 2021. Context-aware scheduling in Fog computing: A survey, taxonomy, challenges and future directions. Journal of Network and Computer Applications (2021), 103008.
[86]
Vibha Jain and Bijendra Kumar. 2022. Auction based cost-efficient resource allocation by utilizing blockchain in fog computing. Transactions on Emerging Telecommunications Technologies (2022), e4469.
[87]
Bushra Jamil, Humaira Ijaz, Mohammad Shojafar, Kashif Munir, and Rajkumar Buyya. 2022. Resource allocation and task scheduling in fog computing and internet of everything environments: A taxonomy, review, and future directions. (2022).
[88]
Gopal Chandra Jana and Sudatta Banerjee. 2017. Enhancement of QoS for fog computing model aspect of robust resource management. In 2017 International Conference on Intelligent Computing, Instrumentation and Control Technologies (ICICICT). IEEE, 1462–1466.
[89]
Amir Javadpour, Guojun Wang, and Samira Rezaei. 2020. Resource management in a peer to peer cloud network for IoT. Wireless Personal Communications 115, 3 (2020), 2471–2488.
[90]
Sakeena Javaid, Nadeem Javaid, Sahrish Khan Tayyaba, Norin Abdul Sattar, Bibi Ruqia, and Maida Zahid. 2018. Resource allocation using fog-2-cloud based environment for smart buildings. In 2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC). IEEE, 1173–1177.
[91]
Boqi Jia, Honglin Hu, Yu Zeng, Tianheng Xu, and Yang Yang. 2018. Double-matching resource allocation strategy in fog computing networks based on cost efficiency. Journal of Communications and Networks 20, 3 (2018), 237–246.
[92]
Yutao Jiao, Ping Wang, Dusit Niyato, and Kongrath Suankaewmanee. 2019. Auction mechanisms in cloud/fog computing resource allocation for public blockchain networks. IEEE Transactions on Parallel and Distributed Systems (2019).
[93]
Yingmo Jie, Mingchu Li, Cheng Guo, and Ling Chen. 2019. Game-theoretic online resource allocation scheme on fog computing for mobile multimedia users. China Communications 16, 3 (2019), 22–31.
[94]
Evangelia Kalyvianaki. 2009. Resource Provisioning for Virtualized Server Applications. Technical Report. University of Cambridge, Computer Laboratory.
[95]
Naoki Katoh and Toshihide Ibaraki. 1998. Resource allocation problems. In Handbook of Combinatorial Optimization. Springer, 905–1006.
[96]
Kuljeet Kaur, Tanya Dhand, Neeraj Kumar, and Sherali Zeadally. 2017. Container-as-a-service at the edge: Trade-off between energy efficiency and service availability at fog nano data centers. IEEE Wireless Communications 24, 3 (2017), 48–56.
[97]
Kasem Khalil, Khalid Elgazzar, Mohamed Seliem, and Magdy Bayoumi. 2020. Resource discovery techniques in the internet of things: A review. Internet of Things 12 (2020), 100293.
[98]
Nosipho N. Khumalo, Olutayo O. Oyerinde, and Luzango Mfupe. 2021. Reinforcement learning-based resource management model for fog radio access network architectures in 5G. IEEE Access 9 (2021), 12706–12716.
[99]
Sungwook Kim. 2019. Novel resource allocation algorithms for the social internet of things based fog computing paradigm. Wireless Communications and Mobile Computing 2019 (2019).
[100]
Barbara Kitchenham, O. Pearl Brereton, David Budgen, Mark Turner, John Bailey, and Stephen Linkman. 2009. Systematic literature reviews in software engineering–a systematic literature review. Information and Software Technology 51, 1 (2009), 7–15.
[101]
Vrinda Kochar. 2016. Real time resource allocation on a dynamic two level symbiotic fog architecture. 2016 Sixth International Symposium on Embedded Computing and System Design (ISED) (2016), 49–55.
[102]
Ranesh Kumar, Saurabh Garg, Andrew Chan, and Sudheer Kumar. 2020. Deadline-based dynamic resource allocation and provisioning algorithms in fog-cloud environment. Future Generation Computer Systems 104 (2020), 131–141.
[103]
Yongxuan Lai, Fan Yang, Lu Zhang, and Ziyu Lin. 2018. Distributed public vehicle system based on fog nodes and vehicular sensing. IEEE Access 6 (2018), 22011–22024.
[104]
Yanwen Lan, Xiaoxiang Wang, Dongyu Wang, Zhaolin Liu, and Yibo Zhang. 2019. Task caching, offloading, and resource allocation in D2D-aided fog computing networks. IEEE Access 7 (2019), 104876–104891.
[105]
Mohammed Laroui, Boubakr Nour, Hassine Moungla, Moussa A. Cherif, Hossam Afifi, and Mohsen Guizani. 2021. Edge and fog computing for IoT: A survey on current research activities & future directions. Computer Communications (2021).
[106]
Seung-seob Lee and SuKyoung Lee. 2020. Resource allocation for vehicular fog computing using reinforcement learning combined with heuristic information. IEEE Internet of Things Journal 7, 10 (2020), 10450–10464.
[107]
Isaac Lera, Carlos Guerrero, and Carlos Juiz. 2019. Analyzing the applicability of a multi-criteria decision method in fog computing placement problem. In 2019 Fourth International Conference on Fog and Mobile Edge Computing (FMEC). IEEE, 13–20.
[108]
Chao Li, Yushu Xue, Jing Wang, Weigong Zhang, and Tao Li. 2018. Edge-oriented computing paradigms: A survey on architecture design and system management. ACM Computing Surveys (CSUR) 51, 2 (2018), 1–34.
[109]
Lei Li, Quansheng Guan, Lianwen Jin, and Mian Guo. 2019. Resource allocation and task offloading for heterogeneous real-time tasks with uncertain duration time in a fog queueing system. IEEE Access 7 (2019), 9912–9925.
[110]
Qiuping Li, Junhui Zhao, Yi Gong, and Qingmiao Zhang. 2019. Energy-efficient computation offloading and resource allocation in fog computing for internet of everything. China Communications 16, 3 (2019), 32–41.
[111]
Xi Li, Yiming Liu, Hong Ji, Heli Zhang, and Victor C. M. Leung. 2019. Optimizing resources allocation for fog computing-based internet of things networks. IEEE Access 7 (2019), 64907–64922.
[112]
Mingwei Lin, Zheyu Chen, Huchang Liao, and Zeshui Xu. 2019. ELECTRE II method to deal with probabilistic linguistic term sets and its application to edge computing. Nonlinear Dynamics 96, 3 (2019), 2125–2143.
[113]
Shuaibing Lu, Jie Wu, Yubin Duan, Ning Wang, and Juan Fang. 2020. Towards cost-efficient resource provisioning with multiple mobile users in fog computing. J. Parallel and Distrib. Comput. 146 (2020), 96–106.
[114]
Shuaibing Lu, Jie Wu, Yubin Duan, Ning Wang, and Zhiyi Fang. 2018. Cost-efficient resource provisioning in delay-sensitive cooperative fog computing. In 2018 IEEE 24th International Conference on Parallel and Distributed Systems (ICPADS). IEEE, 706–713.
[115]
Juan Luo, Luxiu Yin, Jinyu Hu, Chun Wang, Xuan Liu, Xin Fan, and Haibo Luo. 2019. Container-based fog computing architecture and energy-balancing scheduling algorithm for energy IoT. Future Generation Computer Systems 97 (2019), 50–60.
[116]
Quyuan Luo, Shihong Hu, Changle Li, Guanghui Li, and Weisong Shi. 2021. Resource scheduling in edge computing: A survey. IEEE Communications Surveys & Tutorials (2021).
[117]
Nguyen Cong Luong, Yutao Jiao, Ping Wang, Dusit Niyato, Dong In Kim, and Zhu Han. 2020. A machine-learning-based auction for resource trading in fog computing. IEEE Communications Magazine 58, 3 (2020), 82–88.
[118]
Anil Madhavapeddy and David J. Scott. 2014. Unikernels: The rise of the virtual library operating system. Commun. ACM 57, 1 (2014), 61–69.
[119]
Redowan Mahmud, Ramamohanarao Kotagiri, and Rajkumar Buyya. 2018. Fog computing: A taxonomy, survey and future directions. In Internet of Everything. Springer, 103–130.
[120]
Redowan Mahmud, Kotagiri Ramamohanarao, and Rajkumar Buyya. 2020. Application management in fog computing environments: A taxonomy, review and future directions. ACM Comput. Surv. 53, 4, Article 88 (July 2020), 43 pages.
[121]
Swati Malik, Kamali Gupta, and Malvinder Singh. 2020. Resource management in fog computing using clustering techniques: A systematic study. Annals of the Romanian Society for Cell Biology (2020), 77–92.
[122]
Sathish Kumar Mani and Iyapparaja Meenakshisundaram. 2020. Improving quality-of-service in fog computing through efficient resource allocation. Computational Intelligence 36, 4 (2020), 1527–1547.
[123]
Sunilkumar S. Manvi and Gopal Krishna Shyam. 2014. Resource management for infrastructure as a service (IaaS) in cloud computing: A survey. Journal of Network and Computer Applications 41, 1 (2014), 424–440.
[124]
Spiridoula V. Margariti, Vassilios V. Dimakopoulos, and Georgios Tsoumanis. 2020. Modeling and simulation tools for fog computing–a comprehensive survey from a cost perspective. Future Internet 12, 5 (2020), 89.
[125]
Eva Marín-Tordera, Xavi Masip-Bruin, Jordi García-Almiñana, Admela Jukan, Guang-Jie Ren, and Jiafeng Zhu. 2017. Do we all really know what a fog node is? Current trends towards an open definition. Computer Communications 109 (2017), 117–130.
[126]
Xavi Masip, Eva Marín, Jordi Garcia, and Sergi Sànchez. 2020. Collaborative mechanism for hybrid fog-cloud scenarios. Fog and Fogonomics: Challenges and Practices of Fog Computing, Communication, Networking, Strategy, and Economics (2020), 7–60.
[127]
Khaled Matrouk and Kholoud Alatoun. 2021. Scheduling algorithms in fog computing: A survey. Int. J. Networked Distributed Comput. 9, 1 (2021), 59–74.
[128]
Adriana Mijuskovic, Alessandro Chiumento, Rob Bemthuis, Adina Aldea, and Paul Havinga. 2021. Resource management techniques for cloud/fog and edge computing: An evaluation framework and classification. Sensors 21, 5 (2021), 1832.
[129]
Manoj Kumar Mishra, Niranjan Kumar Ray, Amulya Ratna Swain, Ganga Bishnu Mund, and Bhabani Sankar Prasad Mishra. 2019. An adaptive model for resource selection and allocation in fog computing environment. Computers & Electrical Engineering 77 (2019), 217–229.
[130]
Suchintan Mishra, Manmath Narayan Sahoo, Sambit Bakshi, and Joel J. P. C. Rodrigues. 2020. Dynamic resource allocation in fog-cloud hybrid systems using multicriteria AHP techniques. IEEE Internet of Things Journal 7, 9 (2020), 8993–9000.
[131]
Nour Mostafa, Ismaeel Al Ridhawi, and Moayad Aloqaily. 2018. Fog resource selection using historical executions. In 2018 Third International Conference on Fog and Mobile Edge Computing (FMEC). IEEE, 272–276.
[132]
Jose Moura and David Hutchison. 2020. Fog computing systems: State of the art, research issues and future trends, with a focus on resilience. Journal of Network and Computer Applications (2020), 102784.
[133]
Mithun Mukherjee, Suman Kumar, Mohammad Shojafar, Qi Zhang, and Constandinos X. Mavromoustakis. 2020. Joint task offloading and resource allocation for delay-sensitive fog networks. In ICC 2019 - 2019 IEEE International Conference on Communications (ICC). 1–7.
[134]
Mithun Mukherjee, Lei Shu, and Di Wang. 2018. Survey of fog computing: Fundamental, network applications, and research challenges. IEEE Communications Surveys and Tutorials 20, 3 (2018), 1826–1857.
[135]
Saad Mustafa, Babar Nazir, Amir Hayat, Atta Ur Rehman Khan, and Sajjad A. Madani. 2015. Resource management in cloud computing: Taxonomy, prospects, and challenges. Computers and Electrical Engineering 47 (2015), 186–203.
[136]
Mohammed Islam Naas, Philippe Raipin Parvedy, Jalil Boukhobza, and Laurent Lemarchand. 2017. IFogStor: An IoT data placement strategy for fog infrastructure. In Proceedings of the 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC 2017). 97–104.
[137]
Ranesh Kumar Naha, Saurabh Garg, Dimitrios Georgakopoulos, Prem Prakash Jayaraman, Longxiang Gao, Yong Xiang, and Rajiv Ranjan. 2018. Fog computing: Survey of trends, architectures, requirements, and research directions. IEEE Access 6 (2018), 47980–48009.
[138]
B. V. Natesha and Ram Mohana Reddy Guddeti. 2021. Adopting elitism-based genetic algorithm for minimizing multi-objective problems of IoT service placement in fog computing environment. Journal of Network and Computer Applications 178 (2021), 102972.
[139]
Shubha Brata Nath, Harshit Gupta, Sandip Chakraborty, and Soumya K. Ghosh. 2018. A survey of fog computing and communication: Current researches and future directions. arXiv preprint arXiv:1804.04365 (2018).
[140]
Duong Tung Nguyen, Long Bao Le, and Vijay K. Bhargava. 2019. A market-based framework for multi-resource allocation in fog computing. IEEE/ACM Transactions on Networking (2019).
[141]
Quang-Hung Nguyen and Thanh-An Truong Pham. 2018. Studying and developing a resource allocation algorithm in Fog computing. In 2018 International Conference on Advanced Computing and Applications (ACOMP). IEEE, 76–82.
[142]
Lina Ni, Jinquan Zhang, and Jiguo Yu. 2016. Priced timed petri nets based resource allocation strategy for fog computing. In 2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI). IEEE, 39–44.
[143]
Pu-yan Nie and Pei-ai Zhang. 2008. A note on Stackelberg games. In 2008 Chinese Control and Decision Conference. IEEE, 1201–1203.
[144]
Sunday Oyinlola Ogundoyin and Ismaila Adeniyi Kamil. 2021. Optimization techniques and applications in fog computing: An exhaustive survey. Swarm and Evolutionary Computation 66 (2021), 100937.
[145]
OpenFog Consortium Architecture Working Group. 2017. OpenFog Reference Architecture for Fog Computing. (February 2017), 162 pages.
[146]
Sharmila Patil-Karpe, S. H. Brahmananda, and Shrunoti Karpe. 2020. Review of resource allocation in fog computing. In Smart Intelligent Computing and Applications. Springer, 327–334.
[147]
Mugen Peng and Kecheng Zhang. 2016. Recent advances in fog radio access networks: Performance analysis and radio resource allocation. IEEE Access 4 (2016), 5003–5009.
[148]
Rickson S. Pereira, Douglas D. Lieira, Marco A. C. da Silva, Adinovam H. M. Pimenta, Joahannes B. D. da Costa, Denis Rosário, and Rodolfo I. Meneguette. 2019. A novel fog-based resource allocation policy for vehicular clouds in the highway environment. In 2019 IEEE Latin-American Conference on Communications (LATINCOM). IEEE, 1–6.
[149]
Kai Petersen, Robert Feldt, Shahid Mujtaba, and Michael Mattsson. 2008. Systematic mapping studies in software engineering. In EASE, Vol. 8. 68–77.
[150]
Morteza Rahimi, Maryam Songhorabadi, and Mostafa Haghi Kashani. 2020. Fog-based smart homes: A systematic review. Journal of Network and Computer Applications 153 (2020), 102531.
[151]
Satyakam Rahul and Rajni Aron. 2021. Fog computing architecture, application and resource allocation: A review. CEUR Workshop Proceedings 2889 (2021), 31–42.
[152]
G. Rakshith, M. V. Rahul, G. S. Sanjay, B. V. Natesha, and G. Ram Mohana Reddy. 2018. Resource provisioning framework for IoT applications in fog computing environment. In 2018 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS). IEEE, 1–6.
[153]
Pasika Ranaweera, Anca Delia Jurcut, and Madhusanka Liyanage. 2021. Survey on multi-access edge computing security and privacy. IEEE Communications Surveys & Tutorials 23, 2 (2021), 1078–1124.
[154]
Partha Pratim Ray. 2017. An introduction to dew computing: Definition, concept and implications. IEEE Access 6 (2017), 723–737.
[155]
D. Arunkumar Reddy and P. Venkata Krishna. 2020. Feedback-based fuzzy resource management in IoT using fog computing. Evolutionary Intelligence (2020), 1–13.
[156]
Anees Ur Rehman, Zulfiqar Ahmad, Ali Imran Jehangiri, Mohammed Alaa Ala’Anzy, Mohamed Othman, Arif Iqbal Umar, and Jamil Ahmad. 2020. Dynamic energy efficient resource allocation strategy for load balancing in fog environment. IEEE Access 8 (2020), 199829–199839.
[157]
Farah Aït Salaht, Frédéric Desprez, and Adrien Lebre. 2020. An overview of service placement problem in fog and edge computing. ACM Computing Surveys (CSUR) 53, 3 (2020), 1–35.
[158]
José Santos, Tim Wauters, Bruno Volckaert, and Filip De Turck. 2018. Towards dynamic fog resource provisioning for smart city applications. In 2018 14th International Conference on Network and Service Management (CNSM). IEEE, 290–294.
[159]
José Santos, Tim Wauters, Bruno Volckaert, and Filip De Turck. 2019. Resource provisioning in Fog computing: From theory to practice. Sensors 19, 10 (2019), 2238.
[160]
José Santos, Tim Wauters, Bruno Volckaert, and Filip De Turck. 2019. Towards network-aware resource provisioning in Kubernetes for fog computing applications. In 2019 IEEE Conference on Network Softwarization (NetSoft). IEEE, 351–359.
[161]
José Santos, Tim Wauters, Bruno Volckaert, and Filip De Turck. 2021. Towards end-to-end resource provisioning in Fog Computing over Low Power Wide Area Networks. Journal of Network and Computer Applications 175 (2021), 102915.
[162]
Souvik Sengupta. 2020. Adaptive learning-based resource management strategy in fog-to-cloud. (2020).
[163]
Lawrence F. Shampine and Mark W. Reichelt. 1997. The MATLAB ODE Suite. SIAM Journal on Scientific Computing 18, 1 (1997), 1–22.
[164]
Nafiseh Sharghivand, Farnaz Derakhshan, and Nazli Siasi. 2021. A comprehensive survey on auction mechanism design for cloud/edge resource management and pricing. IEEE Access (2021).
[165]
V. Sindhu and M. Prakash. 2019. A survey on task scheduling and resource allocation methods in fog based IoT applications. In International Conference on Communication and Intelligent Systems. Springer, 89–97.
[166]
Sukhpal Singh and Inderveer Chana. 2016. Cloud resource provisioning: Survey, status and future research directions. Knowledge and Information Systems 49, 3 (2016), 1005–1069.
[167]
Olena Skarlat, Matteo Nardelli, Stefan Schulte, and Schahram Dustdar. 2017. Towards QoS-aware fog service placement. In 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC). IEEE, 89–96.
[168]
Olena Skarlat, Stefan Schulte, Michael Borkowski, and Philipp Leitner. 2016. Resource provisioning for IoT services in the fog. In 2016 IEEE 9th International Conference on Service-oriented Computing and Applications (SOCA). IEEE, 32–39.
[169]
Vitor Barbosa Souza, Xavier Masip-Bruin, Eva Marín-Tordera, Sergio Sànchez-López, Jordi Garcia, Guang-Jie Ren, Admela Jukan, and A. Juan Ferrer. 2018. Towards a proper service placement in combined Fog-to-Cloud (F2C) architectures. Future Generation Computer Systems 87 (2018), 1–15.
[170]
Vitor Barbosa C. Souza, Wilson Ramírez, Xavier Masip-Bruin, Eva Marín-Tordera, G. Ren, and Ghazal Tashakor. 2016. Handling service allocation in combined fog-cloud scenarios. In 2016 IEEE International Conference on Communications (ICC). IEEE, 1–5.
[171]
M. Sudhakara, K. Dinesh Kumar, Ravi Kumar Poluru, R. Lokesh Kumar, and S. Bharath Bhushan. 2020. Towards efficient resource management in fog computing: A survey and future directions. In Architecture and Security Issues in Fog Computing Applications. IGI Global, 158–182.
[172]
Huaiying Sun, Huiqun Yu, Guisheng Fan, and Liqiong Chen. 2019. Energy and time efficient task offloading and resource allocation on the generic IoT-fog-cloud architecture. Peer-to-Peer Networking and Applications (2019), 1–16.
[173]
Sergej Svorobej, Patricia Takako Endo, Malika Bendechache, Christos Filelis-Papadopoulos, Konstantinos M. Giannoutakis, George A. Gravvanis, Dimitrios Tzovaras, James Byrne, and Theo Lynn. 2019. Simulating fog and edge computing scenarios: An overview and research challenges. Future Internet 11, 3 (2019), 55.
[174]
Uma Tadakamalla and Daniel A. Menascé. 2019. Autonomic resource management using analytic models for fog/cloud computing. In 2019 IEEE International Conference on Fog Computing (ICFC). IEEE, 69–79.
[175]
Mohit Taneja and Alan Davy. 2017. Resource aware placement of IoT application modules in Fog-Cloud Computing Paradigm. In 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM). IEEE, 1222–1228.
[176]
Bo Tang, Zhen Chen, Gerald Hefferman, Tao Wei, Haibo He, and Qing Yang. 2015. A hierarchical distributed fog computing architecture for big data analysis in smart cities. In Proceedings of the ASE BigData & SocialInformatics 2015 (2015), 28.
[177]
Chaogang Tang, Chunsheng Zhu, Xianglin Wei, Huaming Wu, Qing Li, and Joel J. P. C. Rodrigues. 2020. Intelligent resource allocation for utility optimization in RSU-empowered vehicular network. IEEE Access 8 (2020), 94453–94462.
[178]
Klervie Toczé and Simin Nadjm-Tehrani. 2018. A taxonomy for management and optimization of multiple resources in edge computing. Wireless Communications and Mobile Computing 2018 (2018).
[179]
Shiyuan Tong, Yun Liu, Mohamed Cheriet, Michel Kadoch, and Bo Shen. 2020. UCAA: User-centric user association and resource allocation in fog computing networks. IEEE Access 8 (2020), 10671–10685.
[180]
Adel Nadjaran Toosi, Redowan Mahmud, Qinghua Chi, and Rajkumar Buyya. 2019. Management and orchestration of network slices in 5G, fog, edge, and clouds. Fog and Edge Computing: Principles and Paradigms 8 (2019), 79–96.
[181]
Luis M. Vaquero and Luis Rodero-Merino. 2014. Finding your way in the fog: Towards a comprehensive definition of fog computing. SIGCOMM Comput. Commun. Rev. 44, 5 (Oct. 2014), 27–32.
[182]
Shefali Varshney, Rajinder Sandhu, and P. K. Gupta. 2020. QoE-based multi-criteria decision making for resource provisioning in fog computing using AHP technique. International Journal of Knowledge and Systems Science (IJKSS) 11, 4 (2020), 17–30.
[183]
Thai T. Vu, Diep N. Nguyen, Dinh Thai Hoang, and Eryk Dutkiewicz. 2019. QoS-aware fog computing resource allocation using feasibility-finding benders decomposition. In 2019 IEEE Global Communications Conference (GLOBECOM). IEEE, 1–6.
[184]
Haoyu Wang, Lina Wang, Zhichao Zhou, Xueqiang Tao, Giovanni Pau, and Fabio Arena. 2019. Blockchain-based resource allocation model in fog computing. Applied Sciences 9, 24 (2019), 5538.
[185]
Tian Wang, Yuzhu Liang, Weijia Jia, Muhammad Arif, Anfeng Liu, and Mande Xie. 2019. Coupling resource management based on fog computing in smart city systems. Journal of Network and Computer Applications 135 (2019), 11–19.
[186]
Chu-ge Wu, Wei Li, Ling Wang, and Albert Y. Zomaya. 2021. An evolutionary fuzzy scheduler for multi-objective resource allocation in fog computing. Future Generation Computer Systems 117 (2021), 498–509.
[187]
Tiago C. S. Xavier, Igor L. Santos, Flavia C. Delicato, Paulo F. Pires, Marcelo P. Alves, Tiago S. Calmon, Ana C. Oliveira, and Claudio L. Amorim. 2020. Collaborative resource allocation for cloud of things systems. Journal of Network and Computer Applications 159 (2020), 102592.
[188]
Liangliang Yan, Min Zhang, Chuang Song, Danshi Wang, Jin Li, and Luyao Guan. 2019. Deep learning-based containerization resource management in vehicular fog computing. In Asia Communications and Photonics Conference. Optical Society of America, M4A–213.
[189]
Lichao Yang, Ming Li, Heli Zhang, Hong Ji, Mingyan Xiao, and Xi Li. 2020. Distributed resource management for blockchain in fog-enabled IoT networks. IEEE Internet of Things Journal 8, 4 (2020), 2330–2341.
[190]
Jingjing Yao and Nirwan Ansari. 2019. Fog resource provisioning in reliability-aware IoT networks. IEEE Internet of Things Journal 6, 5 (2019), 8262–8269.
[191]
Shanhe Yi, Cheng Li, and Qun Li. 2015. A survey of fog computing. Proceedings of the 2015 Workshop on Mobile Big Data - Mobidata’15 (2015), 37–42. http://dl.acm.org/citation.cfm?doid=2757384.2757397.
[192]
Chao Yin, Tongfang Li, Xiaoping Qu, and Sihao Yuan. 2020. An optimization method for resource allocation in fog computing. In 2020 International Conferences on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData) and IEEE Congress on Cybermatics (Cybermatics). IEEE, 821–828.
[193]
Luxiu Yin, Juan Luo, and Haibo Luo. 2018. Tasks scheduling and resource allocation in fog computing based on containers for smart manufacturing. IEEE Transactions on Industrial Informatics 14, 10 (2018), 4712–4721.
[194]
Ashkan Yousefpour, Caleb Fung, Tam Nguyen, Krishna Kadiyala, Fatemeh Jalali, Amirreza Niakanlahiji, Jian Kong, and Jason P. Jue. 2019. All one needs to know about fog computing and related edge computing paradigms: A complete survey. Journal of Systems Architecture (2019).
[195]
Y. Yu, X. Bu, K. Yang, and Z. Han. 2018. Green fog computing resource allocation using joint benders decomposition, Dinkelbach algorithm, and modified distributed inner convex approximation. In 2018 IEEE International Conference on Communications (ICC). 1–6.
[196]
Lotfi A. Zadeh. 1988. Fuzzy logic. Computer 21, 4 (1988), 83–93.
[197]
Javad Zarrin, Rui L. Aguiar, and João Paulo Barraca. 2018. Resource discovery for distributed computing systems: A comprehensive survey. Journal of Parallel and Distributed Computing 113 (2018), 127–166.
[198]
F. Zhang, Z. Tang, M. Chen, X. Zhou, and W. Jia. 2018. A dynamic resource overbooking mechanism in fog computing. In Proceedings - 15th IEEE International Conference on Mobile Ad Hoc and Sensor Systems, MASS 2018. 89–97.
[199]
Huaqing Zhang, Yong Xiao, Shengrong Bu, Dusit Niyato, F. Richard Yu, and Zhu Han. 2017. Computing resource allocation in three-tier IoT fog networks: A joint optimization approach combining Stackelberg game and matching. IEEE Internet of Things Journal 4, 5 (2017), 1204–1215.
[200]
Huaqing Zhang, Yanru Zhang, Yunan Gu, Dusit Niyato, and Zhu Han. 2017. A hierarchical game framework for resource management in fog computing. IEEE Communications Magazine 55, 8 (2017), 52–57.
[201]
Kecheng Zhang, Mugen Peng, and Yaohua Sun. 2020. Delay-optimized resource allocation in fog-based vehicular networks. IEEE Internet of Things Journal 8, 3 (2020), 1347–1357.
[202]
Lei Zhang and Jiangtao Li. 2018. Enabling robust and privacy-preserving resource allocation in fog computing. IEEE Access 6 (2018), 50384–50393.
[203]
Zhenyu Zhou, Pengju Liu, Junhao Feng, Yan Zhang, Shahid Mumtaz, and Jonathan Rodriguez. 2019. Computation resource allocation and task assignment optimization in vehicular fog computing: A contract-matching approach. IEEE Transactions on Vehicular Technology 68, 4 (2019), 3113–3125.
[204]
Yan Zhuang and Hui Zhou. 2020. A hyper-heuristic resource allocation algorithm for fog computing. In Proceedings of the 2020 the 4th International Conference on Innovation in Artificial Intelligence. 194–199.
[205]
Dennis Zill, Warren S. Wright, and Michael R. Cullen. 2011. Advanced Engineering Mathematics. Jones & Bartlett Learning.

Published In

ACM Computing Surveys, Volume 55, Issue 14s
December 2023
1355 pages
ISSN: 0360-0300
EISSN: 1557-7341
DOI: 10.1145/3606253

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 17 July 2023
Online AM: 03 March 2023
Accepted: 14 February 2023
Revised: 11 February 2023
Received: 07 April 2022
Published in CSUR Volume 55, Issue 14s


Author Tags

  1. Fog computing
  2. resource allocation
  3. resource management
  4. resource provisioning

Qualifiers

  • Survey

Funding Sources

  • CAPES, a Brazilian institution
