
CN112099932A - Optimal pricing method and system for soft-hard deadline task offloading in edge computing - Google Patents

Optimal pricing method and system for soft-hard deadline task offloading in edge computing Download PDF

Info

Publication number
CN112099932A
CN112099932A
Authority
CN
China
Prior art keywords
tasks
deadline
edge
soft
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010975628.9A
Other languages
Chinese (zh)
Inventor
米顿
吕运容
孙志宏
郭棉
段志宏
张清华
李伟明
傅树霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Petrochemical Technology
Original Assignee
Guangdong University of Petrochemical Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Petrochemical Technology filed Critical Guangdong University of Petrochemical Technology
Priority to CN202010975628.9A priority Critical patent/CN112099932A/en
Publication of CN112099932A publication Critical patent/CN112099932A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G06F9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0283 Price estimation or determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Mathematical Physics (AREA)
  • Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an optimal pricing method and system for offloading tasks with soft and hard deadlines in edge computing. The method includes the following steps: design an edge-cloud computing system model and divide the tasks in the system into high-priority and low-priority tasks; establish queue models for the edge server and the cloud server, modeling the edge server as an M|M|1 queue and the cloud server as an M|M|∞ queue, and compute the response times of hard-deadline and soft-deadline tasks on the edge server and the cloud server; compute the costs of the edge server and the cloud server; compute the revenues of the edge server and the cloud server; model the cost of the system with a Wardrop equilibrium; model the revenue of the system with a Nash equilibrium; and evaluate the efficiency of the pricing strategy. The invention can find a pricing strategy that balances cost minimization and revenue maximization.

Description

Optimal pricing method and system for offloading tasks with soft and hard deadlines in edge computing

Technical Field

The invention relates to an optimal pricing method and system for offloading tasks with soft and hard deadlines in edge computing, and belongs to the technical field of edge computing.

Background

Edge computing originated in the content-delivery (media) field. It refers to an open platform, deployed on the side close to the objects or data sources being served, that integrates networking, computing, storage, and application capabilities to provide services nearby. By extending computing and storage resources to resource-constrained end users, edge computing largely removes the transmission-latency bottleneck between end users and remote cloud data centers. Compared with resource-rich cloud servers, however, edge servers are severely limited in computing and storage capacity. Task processing and the allocation of computing resources therefore remain challenging problems in edge-computing systems.

In edge computing, delay-sensitive tasks can be divided according to their timeliness constraints into hard-deadline tasks and soft-deadline tasks. Many schedulers have been proposed for general-purpose computing environments; they have evolved around the number of tasks that miss their deadlines under various time and resource constraints. How to schedule such tasks on limited computing resources, by exploiting their similarity and timing characteristics, is an interesting research topic in edge computing.

Reference 1 designs a scheduling policy that selects the most suitable location for tasks with hard and soft deadlines. Their hard-deadline tasks, however, require very strict deadlines in order to execute on embedded systems, and the computing resources remaining at the edge nodes are used to execute soft-deadline tasks, so those tasks may suffer the maximum delay beyond their deadlines. Although Reference 1 lays a solid foundation for deadline-based task offloading, it involves little interaction between the edge and the cloud, and it does not consider computation cost.

Reference 2 discusses decentralized resource allocation for tasks according to the tasks' requirements and the network's computation model; its main goal is to reduce system overhead. Reference 3 studies the problem from the perspective of pricing models and resource allocation in microeconomic theory, aiming to optimize, within a limited budget, the limited resources available to terminal-device tasks in edge computing.

To maximize the social welfare of the network, Reference 4 proposes a platform, based on a sharing-economy business model, that can manage network resources in real time. The platform maximizes profit without relying on any user decision about where tasks are placed and processed.

In summary, the existing literature mainly designs pricing models and optimal pricing mechanisms for specific network scenarios. These pricing models do not explicitly consider deadlines, in particular hard-deadline and soft-deadline tasks in edge computing. It is therefore necessary to address the following technical need of the prior art: how to set prices for tasks with different deadlines so that the edge cloud maximizes its revenue by exploiting its own computing and transmission capabilities.

The references are as follows:

[1] N. Auluck, A. Azim, and K. Fizza, "Improving the schedulability of real-time tasks using fog computing," IEEE Transactions on Services Computing, pp. 1–14, Sept. 2019.

[2] Q. He, G. Cui, X. Zhang, F. Chen, S. Deng, H. Jin, Y. Li, and Y. Yang, "A game-theoretical approach for user allocation in edge computing environment," IEEE Trans. Parallel Distrib. Syst., vol. 31, no. 3, pp. 515–529, Mar. 2020.

[3] J. Liu, S. Guo, K. Liu, and L. Feng, "Resource provision and allocation based on microeconomic theory in mobile edge computing," IEEE Trans. Serv. Comput., pp. 1–14, June 2020.

[4] M. Siew, D. W. H. Cai, L. Li, and T. Q. S. Quek, "Dynamic pricing for resource-quota sharing in multi-access edge computing," IEEE Trans. Netw. Sci. Eng., pp. 1–13, June 2020.

Summary of the Invention

The technical problem to be solved by the invention is to overcome the shortcomings of the prior art and to provide an optimal pricing method and system for offloading tasks with soft and hard deadlines in edge computing, so that the edge-cloud computing system can find a pricing strategy that balances cost minimization and revenue maximization.

The invention specifically adopts the following technical solution: an optimal pricing method for offloading tasks with soft and hard deadlines in edge computing, comprising the following steps:

Step SS1: design an edge-cloud computing system model and divide the tasks in the system into high-priority and low-priority tasks;

Step SS2: establish queue models for the edge server and the cloud server respectively, modeling the edge server as an M|M|1 queue and the cloud server as an M|M|∞ queue, and compute the response times of hard-deadline and soft-deadline tasks at the edge server and the cloud server;

Step SS3: compute the costs of the edge server and the cloud server respectively;

Step SS4: compute the revenues of the edge server and the cloud server respectively;

Step SS5: model the cost of the system with a Wardrop equilibrium;

Step SS6: model the revenue of the system with a Nash equilibrium;

Step SS7: evaluate the efficiency of the pricing strategy.

As a preferred embodiment, the tasks in the system in step SS1 are divided according to their time deadlines. High-priority tasks usually require hard deadlines, while low-priority tasks require soft deadlines, so the tasks are also divided into hard-deadline tasks and soft-deadline tasks. Let λ be the arriving task load:

λ = λ_H + λ_S

where λ_H is the number of hard-deadline tasks and λ_S is the number of soft-deadline tasks.

As a preferred embodiment, step SS2 specifically includes:

The number of hard-deadline tasks consists of the number of hard-deadline tasks at the edge server, λ_H^E, and the number of hard-deadline tasks at the cloud server, λ_H^C; similarly, the number of soft-deadline tasks consists of the number of soft-deadline tasks at the edge server, λ_S^E, and the number of soft-deadline tasks at the cloud server, λ_S^C. Hence:

λ_H = λ_H^E + λ_H^C,  λ_S = λ_S^E + λ_S^C

The response times of hard-deadline tasks on the edge server and on the cloud server are, respectively: [formula image]

The response times of soft-deadline tasks on the edge server and on the cloud server are, respectively: [formula image]

where μ_E and μ_C are the service rates of the edge server and the cloud server, respectively, i.e., the number of tasks each server completes per unit time.
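The explicit response-time expressions above are available only as formula images in the published text. As a hedged illustration of the queueing building blocks they rely on, the following Python sketch computes the textbook mean sojourn times of an M|M|1 queue and an M|M|∞ system; the function names and the use of the standard formulas 1/(μ − λ) and 1/μ (plus an optional transfer delay d_t for the cloud) are assumptions for illustration, not the patent's exact expressions.

```python
def mm1_response_time(arrival_rate: float, service_rate: float) -> float:
    """Mean sojourn time of an M|M|1 queue (edge server model).

    Standard result: T = 1 / (mu - lambda), valid only when lambda < mu.
    """
    if arrival_rate >= service_rate:
        raise ValueError("M|M|1 queue is unstable: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)


def mm_inf_response_time(service_rate: float, transfer_delay: float = 0.0) -> float:
    """Mean sojourn time of an M|M|inf system (cloud server model).

    With infinitely many servers there is no queueing, so the response time is
    the mean service time 1/mu, plus an optional edge-to-cloud transfer delay d_t.
    """
    return 1.0 / service_rate + transfer_delay
```

For example, with an edge service rate μ_E = 10 tasks per unit time and an offered edge load of 6 tasks per unit time, mm1_response_time(6.0, 10.0) evaluates to 0.25 time units.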

As a preferred embodiment, step SS3 specifically includes:

The cost of hard-deadline tasks at the edge server is: [formula image]

The cost of soft-deadline tasks at the edge server is: [formula image]

where α is a parameter that trades off delay against cost, i.e., the longer a task's processing time, the higher the total cost incurred, and P_E is the cost of each task on the edge server;

The costs of hard-deadline tasks and soft-deadline tasks at the cloud server are, respectively: [formula image]

where d_t is the average transmission delay of transferring a single task from the edge server to the cloud server, and P_C is the cost of each task on the cloud server.
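Since the cost expressions themselves appear only as formula images, the sketch below shows one plausible structural form consistent with the surrounding definitions (a delay penalty weighted by α plus the per-task price); the function name and this exact form are assumptions, not the patent's formulas.

```python
def per_class_cost(load: float, response_time: float, price: float, alpha: float) -> float:
    """Assumed total cost of one task class at one server.

    Each of the `load` tasks pays a delay penalty alpha * response_time plus the
    per-task price. For the cloud server, the average edge-to-cloud transfer
    delay d_t would be added to the response time before calling this helper.
    """
    return load * (alpha * response_time + price)
```

For the edge server one would pass the M|M|1 response time from the previous sketch; for the cloud server, the M|M|∞ response time with the transfer delay d_t added.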

As a preferred embodiment, step SS4 specifically includes:

The revenue of the edge server is: [formula image]

The revenue of the cloud server is: [formula image]

where λ_H^E is the number of hard-deadline tasks at the edge server, λ_H^C is the number of hard-deadline tasks at the cloud server, λ_S^E is the number of soft-deadline tasks at the edge server, and λ_S^C is the number of soft-deadline tasks at the cloud server.
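The revenue expressions are likewise given only as formula images; the minimal sketch below assumes the natural per-task-price-times-served-load form suggested by the definitions above, which is an assumption rather than the patent's exact formula.

```python
def server_revenue(price: float, hard_load: float, soft_load: float) -> float:
    """Assumed revenue of a server: its per-task price times the hard- and
    soft-deadline load it actually serves. The patent's exact revenue
    expressions (U_E, U_C) are given only as formula images."""
    return price * (hard_load + soft_load)
```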

As a preferred embodiment, step SS5 specifically includes:

At equilibrium, the total costs of the soft- and hard-deadline tasks are equal, namely: [formula image]

The fraction of soft-deadline tasks in the total task load plays an important role in the total computing cost of the cloud server and the edge server, and is therefore written as: [formula image]

where φ is the deadline offset.
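A minimal numerical sketch of the Wardrop condition is given below: it splits the soft-deadline load between the edge server and the cloud server so that both choices incur equal total cost, using the illustrative M|M|1 / M|M|∞ delay model and cost form sketched above. The function name, the bisection procedure, and the omission of the deadline offset φ (whose closed-form role is given only as a formula image) are all assumptions.

```python
def wardrop_soft_split(soft_load, hard_edge_load, mu_edge, mu_cloud,
                       price_edge, price_cloud, alpha, transfer_delay, tol=1e-9):
    """Split the soft-deadline load between edge and cloud so that, at the
    Wardrop equilibrium, both choices have equal per-task cost (or all load
    goes to the strictly cheaper side).

    Illustrative assumptions: the edge is an M|M|1 queue already carrying
    `hard_edge_load` hard-deadline tasks, the cloud is M|M|inf with an extra
    transfer delay d_t, and the per-task cost is alpha*delay + price.

    Returns the fraction q in [0, 1] of soft-deadline tasks kept at the edge.
    """
    def cost_gap(q):
        edge_arrivals = hard_edge_load + q * soft_load
        if edge_arrivals >= mu_edge:          # unstable edge queue: infinitely costly
            return float("inf")
        edge_cost = alpha / (mu_edge - edge_arrivals) + price_edge
        cloud_cost = alpha * (1.0 / mu_cloud + transfer_delay) + price_cloud
        return edge_cost - cloud_cost

    if cost_gap(0.0) >= 0:                    # edge never cheaper: offload everything
        return 0.0
    if cost_gap(1.0) <= 0:                    # edge always cheaper: keep everything
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:                      # bisection on the monotone cost gap
        mid = 0.5 * (lo + hi)
        if cost_gap(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```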

As a preferred embodiment, step SS6 specifically includes:

Define the Nash prices of the edge server and the cloud server, respectively; the Nash price of the edge server is defined as: [formula image]

Similarly, the Nash price of the cloud server is defined as: [formula image]

where γ = 2(λ − 2μ_E)² and the remaining auxiliary coefficients are likewise given as formula images in the original filing.
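The closed-form Nash prices are available only as formula images. As a hedged illustration of the underlying game, the sketch below finds a price pair by best-response iteration: each server repeatedly picks, from a grid of candidate per-task prices, the price that maximizes its own revenue given the other server's price and the resulting Wardrop split (reusing wardrop_soft_split from the previous sketch). The revenue form, the grid search, and all names are assumptions, not the patent's derivation.

```python
def best_response_prices(soft_load, hard_edge_load, mu_edge, mu_cloud, alpha,
                         transfer_delay, price_grid, rounds=50):
    """Illustrative best-response dynamics for the edge/cloud pricing game.

    In each round a server scans the price grid and keeps the price that
    maximizes its assumed revenue (price times the load routed to it under the
    Wardrop split). When neither price changes, the pair is a Nash equilibrium
    on the grid.
    """
    p_edge, p_cloud = price_grid[0], price_grid[0]
    for _ in range(rounds):
        prev = (p_edge, p_cloud)

        def edge_revenue(p):
            q = wardrop_soft_split(soft_load, hard_edge_load, mu_edge, mu_cloud,
                                   p, p_cloud, alpha, transfer_delay)
            return p * (hard_edge_load + q * soft_load)

        def cloud_revenue(p):
            q = wardrop_soft_split(soft_load, hard_edge_load, mu_edge, mu_cloud,
                                   p_edge, p, alpha, transfer_delay)
            return p * (1.0 - q) * soft_load

        p_edge = max(price_grid, key=edge_revenue)
        p_cloud = max(price_grid, key=cloud_revenue)
        if (p_edge, p_cloud) == prev:   # no price changed: grid Nash equilibrium reached
            break
    return p_edge, p_cloud
```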

As a preferred embodiment, step SS7 specifically includes:

To evaluate the efficiency of the pricing strategy, the price of anarchy (PoA) is introduced, and the social welfare function S is defined as: [formula image]

where x and y are the hard-deadline and soft-deadline task loads, respectively;

To minimize the social welfare function S, the above formula is rewritten as: [formula image]

where T* is the total delay experienced in the network in the equilibrium state.

As a preferred embodiment, step SS7 further includes: the PoA is computed from T* and S_min as PoA = T*/S_min.
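A small sketch of the efficiency evaluation follows: it grid-searches the socially optimal split to obtain S_min under the same assumed delay model and then forms the ratio PoA = T*/S_min. The patent's exact social welfare function S is given only as a formula image, so this is an illustrative stand-in.

```python
def min_social_welfare(soft_load, hard_edge_load, mu_edge, mu_cloud,
                       transfer_delay, steps=1000):
    """Grid search for the socially optimal soft-task split minimizing total
    delay, under the illustrative M|M|1 / M|M|inf delay model used above."""
    best = float("inf")
    for i in range(steps + 1):
        q = i / steps                          # fraction of soft tasks kept at the edge
        edge_arrivals = hard_edge_load + q * soft_load
        if edge_arrivals >= mu_edge:           # skip splits that destabilize the edge queue
            continue
        delay = (edge_arrivals / (mu_edge - edge_arrivals)
                 + (1.0 - q) * soft_load * (1.0 / mu_cloud + transfer_delay))
        best = min(best, delay)
    return best


def price_of_anarchy(total_delay_at_equilibrium, s_min):
    """PoA = T* / S_min; values close to 1 indicate an efficient pricing strategy."""
    return total_delay_at_equilibrium / s_min
```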

The invention also provides an optimal pricing system for offloading tasks with soft and hard deadlines in edge computing, comprising:

a task division module, configured to: design an edge-cloud computing system model and divide the tasks in the system into high-priority and low-priority tasks;

a queue model generation module, configured to: establish queue models for the edge server and the cloud server respectively, modeling the edge server as an M|M|1 queue and the cloud server as an M|M|∞ queue, and compute the response times of hard-deadline and soft-deadline tasks at the edge server and the cloud server;

a server cost calculation module, configured to: compute the costs of the edge server and the cloud server respectively;

a server revenue calculation module, configured to: compute the revenues of the edge server and the cloud server respectively;

a system cost modeling module, configured to: model the cost of the system with a Wardrop equilibrium;

a system revenue modeling module, configured to: model the revenue of the system with a Nash equilibrium;

an efficiency evaluation module, configured to: evaluate the efficiency of the pricing strategy.
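As a rough illustration of how the seven modules could be organized in software, the skeleton below mirrors them one-to-one; the class and method names are assumptions, and the bodies would reuse the helper functions sketched in the preceding sections.

```python
class SoftHardDeadlinePricingSystem:
    """Illustrative skeleton mirroring the seven modules of the system."""

    def __init__(self, soft_load, hard_edge_load, mu_edge, mu_cloud, alpha, transfer_delay):
        self.soft_load = soft_load              # soft-deadline task load lambda_S
        self.hard_edge_load = hard_edge_load    # hard-deadline load kept at the edge
        self.mu_edge = mu_edge                  # edge server service rate mu_E
        self.mu_cloud = mu_cloud                # cloud server service rate mu_C
        self.alpha = alpha                      # delay/cost trade-off parameter
        self.transfer_delay = transfer_delay    # edge-to-cloud transfer delay d_t

    def divide_tasks(self, hard_load, soft_load): ...      # task division module
    def build_queue_models(self): ...                      # queue model generation module
    def server_costs(self, prices): ...                    # server cost calculation module
    def server_revenues(self, prices): ...                 # server revenue calculation module
    def wardrop_cost_model(self): ...                      # system cost modeling module
    def nash_revenue_model(self): ...                      # system revenue modeling module
    def evaluate_pricing_efficiency(self): ...             # efficiency evaluation module
```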

Beneficial effects achieved by the invention: the invention proposes an optimal pricing method for offloading tasks with soft and hard deadlines in edge computing. A pricing model is designed to determine the equilibrium prices of the edge server and the cloud server: first, the edge-cloud server system is built and the tasks are divided into soft- and hard-deadline tasks; then queue models of the edge server and the cloud server are established to determine the response times of the soft- and hard-deadline tasks; next, the cost and revenue functions of the edge server and the cloud server are designed; finally, the Nash equilibrium and the Wardrop equilibrium are used to model the cost and the revenue of the system, so that the system can find a pricing strategy that balances cost minimization and revenue maximization.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the edge-cloud computing system model of the invention;

Fig. 2 is a schematic diagram of hard-deadline tasks and soft-deadline tasks of the invention;

Fig. 3 is a schematic diagram of soft- and hard-deadline tasks of the invention being processed locally at the edge server or offloaded to the cloud server.

Detailed Description of the Embodiments

The invention is further described below with reference to the accompanying drawings. The following embodiments are only intended to illustrate the technical solution of the invention more clearly and do not limit the scope of protection of the invention.

Embodiment 1: as shown in Fig. 1, Fig. 2 and Fig. 3, an optimal pricing method of the invention for offloading tasks with hard and soft deadlines in edge computing includes the following seven steps.

1) Design the edge-cloud computing system model, as shown in Fig. 1, and divide the tasks in the system into high-priority and low-priority tasks.

The edge-cloud server system of the invention is assumed to be a time-slotted system. Tasks can be divided into high-priority and low-priority tasks according to their priority, and, according to their timing requirements, into hard-deadline tasks and soft-deadline tasks. High-priority tasks correspond to hard-deadline tasks, which are used to guarantee the correct operation of the system. Compared with hard-deadline tasks, soft-deadline tasks can tolerate longer delays: as shown in Fig. 2, a soft-deadline task allows an extra offset in delay relative to a hard-deadline task. Let λ be the task load arriving at the system, composed of the hard-deadline task load and the soft-deadline task load:

λ = λ_H + λ_S

where λ_H is the number of hard-deadline tasks and λ_S is the number of soft-deadline tasks.

2) Establish queue models for the edge server and the cloud server respectively, and compute the response times of the hard- and soft-deadline tasks at the edge server and the cloud server.

Without loss of generality, the invention models the edge server as an M|M|1 queue and the cloud server as an M|M|∞ queue. Define μ_E and μ_C as the service rates, i.e., the number of tasks completed by the respective server per unit time. The number of hard-deadline tasks consists of the number of hard-deadline tasks at the edge server, λ_H^E, and the number of hard-deadline tasks at the cloud server, λ_H^C; similarly, the number of soft-deadline tasks consists of the number of soft-deadline tasks at the edge server, λ_S^E, and the number of soft-deadline tasks at the cloud server, λ_S^C:

λ_H = λ_H^E + λ_H^C,  λ_S = λ_S^E + λ_S^C

The response times of the hard-deadline tasks on the edge server and on the cloud server are, respectively: [formula image]

The response times of the soft-deadline tasks on the edge server and on the cloud server are, respectively: [formula image]

3) Calculate the costs of the edge server and the cloud server separately:

3.1 Costs of soft- and hard-deadline tasks at the edge server

To better balance the relationship between delay and cost, the invention introduces the parameter α: the longer a task's processing time, the higher the total cost incurred.

The total cost of hard-deadline tasks is computed as: [formula image]

where P_E is the cost for the edge server to process one task.

The total cost of soft-deadline tasks is computed as: [formula image]

3.2 Costs of soft- and hard-deadline tasks at the cloud server

The total cost of hard-deadline tasks is computed as: [formula image]

The total cost of soft-deadline tasks is computed as: [formula image]

where d_t is the average transmission delay of transferring a single task from the edge server to the cloud server, and P_C is the cost of each task on the cloud server.

4) Revenues of the edge server and the cloud server:

The revenue U_E of the edge server is defined as: [formula image]

The revenue U_C of the cloud server is defined as: [formula image]

5) Use the Wardrop equilibrium to model the total cost of the system:

In the Wardrop equilibrium the total costs of the soft- and hard-deadline tasks are equal, i.e.: [formula image]

The fraction of soft-deadline tasks in the total task load plays an important role in the total computing cost of the cloud server and the edge server, and can therefore be written as: [formula image]

where φ is the deadline offset.

6) Use the Nash equilibrium to model the total revenue of the system:

The Nash equilibrium is used to model the total revenue of the system, seeking the maximum revenue the system can obtain. We define the Nash prices of the edge server and the cloud server, respectively, where the Nash price of the edge server is defined as: [formula image]

where γ = 2(λ − 2μ_E)² and the remaining auxiliary coefficients are given as formula images. Similarly, the Nash price of the cloud server is defined as: [formula image]

7) Evaluate the efficiency of the pricing strategy:

To better evaluate the efficiency of the pricing strategy, the invention introduces the price of anarchy (PoA). The social welfare function S is defined as: [formula image]

where x and y are the hard-deadline and soft-deadline task loads, respectively.

To minimize the social welfare function S, the above formula is rewritten as: [formula image]

Further, the total delay experienced in the network in the equilibrium state is defined as: [formula image]

Finally, the PoA can be computed from T* and S_min: PoA = T*/S_min.
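Putting the sketches together, a hedged end-to-end example of how the helpers from the previous sections could be driven is shown below; all parameter values are arbitrary illustrations, not values from the patent.

```python
if __name__ == "__main__":
    # Arbitrary illustrative parameters: hard-deadline load kept at the edge,
    # soft-deadline load to be split, service rates, delay weight, and d_t.
    hard_edge, soft, mu_e, mu_c, alpha, d_t = 3.0, 5.0, 10.0, 50.0, 1.0, 0.05

    prices = [p / 10 for p in range(1, 51)]            # candidate per-task prices
    p_edge, p_cloud = best_response_prices(soft, hard_edge, mu_e, mu_c,
                                           alpha, d_t, prices)

    q = wardrop_soft_split(soft, hard_edge, mu_e, mu_c, p_edge, p_cloud, alpha, d_t)
    edge_arrivals = hard_edge + q * soft
    t_eq = (edge_arrivals / (mu_e - edge_arrivals)
            + (1.0 - q) * soft * (1.0 / mu_c + d_t))   # T*: total delay at equilibrium

    s_min = min_social_welfare(soft, hard_edge, mu_e, mu_c, d_t)
    print(f"equilibrium prices: edge={p_edge:.2f}, cloud={p_cloud:.2f}")
    print(f"soft-task fraction kept at the edge: {q:.3f}")
    print(f"PoA = {price_of_anarchy(t_eq, s_min):.3f}")
```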

Embodiment 2: the invention also provides an optimal pricing system for offloading tasks with soft and hard deadlines in edge computing, comprising:

a task division module, configured to: design an edge-cloud computing system model and divide the tasks in the system into high-priority and low-priority tasks;

a queue model generation module, configured to: establish queue models for the edge server and the cloud server respectively, modeling the edge server as an M|M|1 queue and the cloud server as an M|M|∞ queue, and compute the response times of hard-deadline and soft-deadline tasks at the edge server and the cloud server;

a server cost calculation module, configured to: compute the costs of the edge server and the cloud server respectively;

a server revenue calculation module, configured to: compute the revenues of the edge server and the cloud server respectively;

a system cost modeling module, configured to: model the cost of the system with a Wardrop equilibrium;

a system revenue modeling module, configured to: model the revenue of the system with a Nash equilibrium;

an efficiency evaluation module, configured to: evaluate the efficiency of the pricing strategy.

Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.

The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram. These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce a computer-implemented process, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that the specific embodiments of the present invention may still be modified or equivalently replaced, and any modification or equivalent replacement that does not depart from the spirit and scope of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. An optimal pricing method for soft-hard deadline task offloading in edge computing, characterized by comprising the following steps:
step SS1: designing an edge-cloud computing system comprising an edge server and a cloud server, wherein tasks entering the system are divided into high-priority tasks and low-priority tasks according to their deadlines;
step SS2: establishing queue models for the edge server and the cloud server respectively, wherein the edge server is modeled as an M|M|1 queue and the cloud server is modeled as an M|M|∞ queue, 1 indicating that there is only one server and ∞ indicating that there are infinitely many servers; and calculating the response times of the hard-deadline tasks and the soft-deadline tasks on the edge server and the cloud server;
step SS3: respectively calculating the costs of the edge server and the cloud server;
step SS4: respectively calculating the revenues of the edge server and the cloud server;
step SS5: modeling the cost of the system using a Wardrop equilibrium;
step SS6: modeling the revenue of the system using a Nash equilibrium;
step SS7: evaluating the efficiency of the pricing strategy.
2. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 1, wherein the division of tasks in the system in step SS1 is based on the time deadlines of the tasks; high-priority tasks typically require a hard deadline and low-priority tasks require a soft deadline, so the tasks are also divided into hard-deadline and soft-deadline tasks; let λ be the arriving task load:
λ = λ_H + λ_S
where λ_H is the number of hard-deadline tasks and λ_S is the number of soft-deadline tasks.
3. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 2, wherein step SS2 specifically comprises:
the number of hard-deadline tasks comprises the number of hard-deadline tasks at the edge server, λ_H^E, and the number of hard-deadline tasks at the cloud server, λ_H^C; similarly, the number of soft-deadline tasks comprises the number of soft-deadline tasks at the edge server, λ_S^E, and the number of soft-deadline tasks at the cloud server, λ_S^C; then:
λ_H = λ_H^E + λ_H^C,  λ_S = λ_S^E + λ_S^C
the response times of the hard-deadline tasks on the edge server and the cloud server are, respectively: [formula image]
the response times of the soft-deadline tasks on the edge server and the cloud server are, respectively: [formula image]
where μ_E and μ_C are the service rates of the edge server and the cloud server, respectively, that is, the number of tasks completed by the edge server and the cloud server per unit time.
4. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 3, wherein step SS3 specifically comprises:
the cost of the hard-deadline tasks at the edge server is: [formula image]
the cost of the soft-deadline tasks at the edge server is: [formula image]
where α is a parameter used to balance the relationship between delay and cost, i.e., the longer the task processing time, the higher the total cost, and P_E is the cost of a single task on the edge server;
the costs of the hard-deadline tasks and the soft-deadline tasks at the cloud server are, respectively: [formula image]
where d_t is the average transmission delay of transferring a single task from the edge server to the cloud server, and P_C is the cost of a single task on the cloud server.
5. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 4, wherein step SS4 specifically comprises:
the revenue of the edge server is: [formula image]
the revenue of the cloud server is: [formula image]
where λ_H^E is the number of hard-deadline tasks at the edge server, λ_H^C is the number of hard-deadline tasks at the cloud server, λ_S^E is the number of soft-deadline tasks at the edge server, and λ_S^C is the number of soft-deadline tasks at the cloud server.
6. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 5, wherein step SS5 specifically comprises:
at the equilibrium, the total costs of the soft- and hard-deadline tasks are equal, namely: [formula image]
the fraction of the soft-deadline tasks in the total task load plays an important role in the total computing cost of the cloud server and the edge server, and is therefore written as: [formula image]
where the costs of a soft-deadline task at the edge server and at the cloud server are as defined above, α > 0 is the parameter balancing the relationship between delay and cost, λ_S denotes the number of soft-deadline tasks arriving at the system, λ denotes the total number of tasks arriving at the system, being the sum of the numbers of soft-deadline and hard-deadline tasks, μ_C denotes the service rate of the cloud server, μ_E denotes the service rate of the edge server, and φ is the deadline offset.
7. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 6, wherein step SS6 specifically comprises:
defining the Nash prices of the edge server and the cloud server, respectively, wherein the Nash price of the edge server is defined as: [formula image]
similarly, the Nash price of the cloud server is defined as: [formula image]
where γ = 2(λ − 2μ_E)² and the remaining auxiliary coefficients are given as formula images in the original filing.
8. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 7, wherein step SS7 specifically comprises:
to evaluate the efficiency of the pricing strategy, the price of anarchy (PoA) is introduced; a social welfare function S is defined: [formula image]
where x and y are the hard-deadline and soft-deadline task loads, respectively;
minimizing the social welfare function S, the above formula is rewritten as follows: [formula image]
where T* is the total delay experienced in the network at the equilibrium.
9. The optimal pricing method for soft-hard deadline task offloading in edge computing according to claim 8, wherein step SS7 further comprises: the PoA is calculated from T* and S_min as PoA = T*/S_min.
10. An optimal pricing system for soft-hard deadline task offloading in edge computing, comprising:
a task partitioning module configured to: design an edge-cloud computing system model and divide the tasks in the system into high-priority tasks and low-priority tasks;
a queue model generation module configured to: establish queue models for the edge server and the cloud server respectively, wherein the edge server is modeled as an M|M|1 queue and the cloud server is modeled as an M|M|∞ queue, and calculate the response times of the hard-deadline tasks and the soft-deadline tasks on the edge server and the cloud server;
a server cost calculation module configured to: respectively calculate the costs of the edge server and the cloud server;
a server revenue calculation module configured to: respectively calculate the revenues of the edge server and the cloud server;
a system cost modeling module configured to: model the cost of the system using a Wardrop equilibrium;
a system revenue modeling module configured to: model the revenue of the system using a Nash equilibrium;
an efficiency evaluation module configured to: evaluate the efficiency of the pricing strategy.
CN202010975628.9A 2020-09-16 2020-09-16 Optimal pricing method and system for soft-hard deadline task offloading in edge computing Pending CN112099932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010975628.9A CN112099932A (en) 2020-09-16 2020-09-16 Optimal pricing method and system for soft-hard deadline task offloading in edge computing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010975628.9A CN112099932A (en) 2020-09-16 2020-09-16 Optimal pricing method and system for soft-hard deadline task offloading in edge computing

Publications (1)

Publication Number Publication Date
CN112099932A true CN112099932A (en) 2020-12-18

Family

ID=73759287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010975628.9A Pending CN112099932A (en) 2020-09-16 2020-09-16 Optimal pricing method and system for soft-hard deadline task offloading in edge computing

Country Status (1)

Country Link
CN (1) CN112099932A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122249A (en) * 2017-05-10 2017-09-01 重庆邮电大学 A kind of task unloading decision-making technique based on edge cloud pricing mechanism
CN110351760A (en) * 2019-07-19 2019-10-18 重庆邮电大学 A kind of mobile edge calculations system dynamic task unloading and resource allocation methods
CN110888687A (en) * 2019-09-27 2020-03-17 华北水利水电大学 Optimal contract design method for mobile edge computing task offloading based on contract design
CN110996393A (en) * 2019-12-12 2020-04-10 大连理工大学 Single-edge computing server and multi-user cooperative computing unloading and resource allocation method
CN111163521A (en) * 2020-01-16 2020-05-15 重庆邮电大学 Resource allocation method in distributed heterogeneous environment in mobile edge computing
CN111400001A (en) * 2020-03-09 2020-07-10 清华大学 Online computing task unloading scheduling method facing edge computing environment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113238839A (en) * 2021-04-26 2021-08-10 深圳微品致远信息科技有限公司 Cloud computing based data management method and device
CN113791878A (en) * 2021-07-21 2021-12-14 南京大学 Deadline-aware distributed task offloading in edge computing
CN113791878B (en) * 2021-07-21 2023-11-17 南京大学 Distributed task unloading method for perceiving expiration date in edge calculation


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
AD01 Patent right deemed abandoned
Effective date of abandoning: 20240614