
CN107708135B - Resource allocation method suitable for mobile edge computing scene - Google Patents


Info

Publication number
CN107708135B
CN107708135B (application CN201710600370.2A)
Authority
CN
China
Prior art keywords
task
base station
calculation result
time
uploading
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710600370.2A
Other languages
Chinese (zh)
Other versions
CN107708135A (en)
Inventor
崔颖
郭成军
刘志
何雯
倪纯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiao Tong University
Original Assignee
Shanghai Jiao Tong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiao Tong University filed Critical Shanghai Jiao Tong University
Priority to CN201710600370.2A priority Critical patent/CN107708135B/en
Publication of CN107708135A publication Critical patent/CN107708135A/en
Application granted granted Critical
Publication of CN107708135B publication Critical patent/CN107708135B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04W — WIRELESS COMMUNICATION NETWORKS
    • H04W 24/00 — Supervisory, monitoring or testing arrangements
    • H04W 24/02 — Arrangements for optimising operational condition
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 — Reducing energy consumption in communication networks
    • Y02D 30/70 — Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The invention relates to a resource allocation method suitable for mobile edge computing scenarios. Based on a task caching and transmission optimization mechanism, the method realizes optimal task caching and upload/download time allocation as well as low-complexity suboptimal task caching and upload/download time allocation. When the calculation result of the task to be executed by a mobile device has already been cached by the base station, the mobile device downloads the calculation result of that task from the base station; otherwise, the mobile device uploads the task to the base station for computation and then downloads the calculation result from the base station. When multiple mobile devices upload the same task to the base station, the base station selects the mobile device with the best channel to perform the upload. When multiple mobile devices download the calculation result of the same task, the base station sends the calculation result once by multicast, at a rate such that the mobile device with the worst channel can just successfully receive it. Compared with the prior art, the invention jointly optimizes caching and upload/download time and has advantages such as energy saving.

Figure 201710600370

Description

A Resource Allocation Method Suitable for Mobile Edge Computing Scenarios

Technical Field

The present invention relates to the field of mobile edge computing in wireless communication technology, and in particular to a resource allocation method suitable for mobile edge computing scenarios.

Background Art

In the future, many computation-intensive and latency-sensitive tasks will need to run on mobile devices. These tasks involve a huge computational load and must be completed in a very short time, for example virtual reality, augmented reality, real-time online games, real-time monitoring, navigation, and ultra-high-definition (UHD) video streaming. However, due to their limited resources, mobile devices cannot finish such tasks in a short time, and their battery capacity is limited, so excessive energy consumption shortens their standby time. Mobile edge computing is a key technology for meeting the demands of future computation-intensive and latency-sensitive tasks. Mobile edge computing, also known as "fog computing", moves the resources of the cloud to the edge of the wireless network closer to the user (such as base stations and wireless access points), where the "cloud" refers to networks that provide resources, such as data centers, the IP backbone network, and the cellular core network. Mobile edge computing enables mobile users to obtain IT and cloud computing services at the nearby edge of the radio access network, which reduces service latency and improves the quality of the user experience.

In a mobile edge computing system, when the task requests issued by users are highly concentrated in the spatial domain and repeated asynchronously or synchronously in the time domain, storing the calculation results closer to the users (for example at the base station) for future reuse can greatly reduce the computational load and latency of mobile devices. A. Al-Shuwaili and O. Simeone, in the article "Optimal resource allocation for mobile edge computing-based augmented reality applications", proposed a resource allocation scheme that allows users to share computing results and minimizes the total mobile energy consumed by offloading under latency and power constraints. However, that article considers only a single computation task and does not consider caching the computation results for future reuse. T. X. Tran, P. Pandey, A. Hajisami, and D. Pompili, in the article "Collaborative multi-bitrate video caching and processing in mobile-edge computing networks", proposed collaborative multi-bitrate video caching and processing in multi-user mobile edge computing systems to minimize the backhaul load, but did not account for the energy consumed by task execution and by downloading the calculation results. Therefore, designing an energy-efficient cache-assisted mobile edge computing system that considers multiple task requests and jointly optimizes caching and upload/download time remains a problem requiring further research.

Summary of the Invention

The purpose of the present invention is to overcome the above-mentioned defects of the prior art and provide a resource allocation method suitable for mobile edge computing scenarios, which jointly optimizes communication, caching, and computing resources in a mobile edge system with multi-task requests so as to save energy, so that an energy-efficient cache-assisted mobile edge computing system can be designed.

The purpose of the present invention can be achieved through the following technical solution:

A resource allocation method suitable for mobile edge computing scenarios, which, based on a task caching and transmission optimization mechanism, realizes either the resource allocation of optimal task caching and upload/download time or the resource allocation of low-complexity suboptimal task caching and upload/download time.

The task caching and transmission optimization mechanism is as follows:

When the calculation result of the task to be executed by a mobile device has already been cached by the base station, the mobile device downloads the calculation result of that task from the base station; when the calculation result has not been cached by the base station, the mobile device uploads the task to the base station for computation and then downloads the calculation result from the base station. When multiple mobile devices upload the same task to the base station, the base station selects the mobile device with the best channel to perform the upload; when multiple mobile devices download the calculation result of the same task, the base station sends the calculation result once by multicast, at a rate such that the mobile device with the worst channel can just successfully receive it.
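As an illustration of this mechanism, the following sketch shows how the base station could pick the uploading device and the device whose channel fixes the multicast rate. It is a minimal, hypothetical Python example: the function name, data structures, and the assumption that channel quality is summarized by a single channel-power value are illustrative and are not taken from the patent text.

def plan_transmissions(requests, cached, channel):
    """requests: {task: [device, ...]}; cached: set of tasks whose results
    are already stored at the base station; channel: {device: channel power}."""
    plan = {}
    for task, devices in requests.items():
        if task in cached:
            uploader = None                          # result already at the base station
        else:
            # only one device uploads: the one with the best (largest) channel power
            uploader = max(devices, key=lambda d: channel[d])
        # the result is multicast once; the worst-channel requester must still
        # decode it, so that device's channel fixes the multicast rate
        rate_setter = min(devices, key=lambda d: channel[d])
        plan[task] = {"uploader": uploader, "multicast_rate_set_by": rate_setter}
    return plan

The same rule is replayed on the concrete Fig. 1 scenario in the embodiment section below.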

The resource allocation of optimal task caching and upload/download time based on the task caching and transmission optimization mechanism is specifically:

Construct the delay-constrained average total system energy minimization problem and solve it to obtain the optimal allocation scheme.

The resource allocation of the optimal task caching and upload/download time specifically comprises the following steps:

A1) Construct the delay-constrained average total system energy minimization problem:

Figure BDA0001356983290000021

Figure BDA0001356983290000022

Figure BDA0001356983290000023

where E(c, T_u(X,H), T_d(X,H), X, H) is the total system energy consumption; c = (c_n) collects the caching indicators, where c_n denotes the caching status of the calculation result of task n at the base station (c_n = 1 if the result is cached at the base station, c_n = 0 if it is not); 𝒩 denotes the set of tasks in the system; X denotes the random system task state and H the random system channel state; T_u(X,H) and T_d(X,H) denote the vectors of per-task upload times and download times under system state (X,H); L_d,n > 0 is the size of the calculation result of task n; C is the base station cache capacity; t_u,n is the upload time of task n; t_d,n is the download time of task n; and T is the total time budget for uploading and downloading;
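The objective and constraints of this problem are shown only as images above. For readability, the following LaTeX lines are a plausible reconstruction inferred from the variable definitions (an expected-total-energy objective, a cache-capacity constraint, and a per-state time budget); they are an assumption-based sketch and may differ in detail from the patented formulation.

\min_{\mathbf{c},\,T_u(\cdot),\,T_d(\cdot)} \; \mathbb{E}_{(X,H)}\!\left[\, E\big(\mathbf{c},\, T_u(X,H),\, T_d(X,H),\, X,\, H\big) \,\right]

\text{s.t.} \quad \sum_{n\in\mathcal{N}} c_n L_{d,n} \le C, \qquad c_n \in \{0,1\},\ n\in\mathcal{N},

\qquad\quad \sum_{n\in\mathcal{N}} \big[(1-c_n)\,t_{u,n} + t_{d,n}\big] \le T, \qquad t_{u,n},\, t_{d,n} \ge 0, \quad \text{for every state } (X,H).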

A2) Construct the Lagrangian relaxation problem:

Figure BDA0001356983290000031

Figure BDA0001356983290000032

Figure BDA0001356983290000033

where L(c, T_u, T_d, λ) is the Lagrangian of the delay-constrained average total system energy minimization problem and λ is the Lagrangian factor;

A3) Construct the dual problem of the delay-constrained average total system energy minimization problem:

Figure BDA0001356983290000034

Figure BDA0001356983290000035

A4) Iteratively solve the dual problem to obtain the optimal allocation scheme.

In step A1), the total system energy consumption is defined as:

Figure BDA0001356983290000036

Figure BDA0001356983290000037

where E_u,n(t_u,n, X, H) is the transmission energy consumed by the mobile device to upload task n to the base station with upload time t_u,n when the system state is (X, H), E_e,n(X) is the energy consumed by the base station to execute task n and obtain its calculation result, and E_d,n(t_d,n, X, H) is the transmission energy required by the base station to send the calculation result with download time t_d,n.

E_u,n(t_u,n, X, H) is defined as

Figure BDA0001356983290000038

where H_u,n is the maximum channel power between the base station and all mobile devices that have task n to execute, L_u,n > 0 is the size of the input of task n, K_n(X) is the total number of mobile devices with task n to execute, and the function g(·) is defined as

Figure BDA0001356983290000039

where B is the bandwidth and n_0 is the variance of the white Gaussian noise.

E_d,n(t_d,n, X, H) is defined as

Figure BDA0001356983290000041

where H_d,n is the minimum channel power between the base station and all mobile devices that have task n to execute, K_n(X) is the total number of mobile devices with task n to execute, and the function g(·) is defined as

Figure BDA0001356983290000042

where B is the bandwidth and n_0 is the variance of the white Gaussian noise.

E_e,n(X) is defined as

Figure BDA0001356983290000043

where F_b is the base station CPU frequency, μ is a constant factor determined by the switched capacitance of the server, L_e,n is the workload of task n, i.e., the number of CPU cycles required to execute task n, and K_n(X) is the total number of mobile devices with task n to execute.
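Since the three energy expressions above appear only as images, the snippet below is a rough Python sketch of a cache-assisted energy model. The concrete closed forms used here, g(x) = n0·(2^(x/B) − 1) for the transmission power and μ·F_b²·L_e,n for the execution energy, are standard modelling assumptions consistent with the variable definitions, not formulas copied from the patent.

def g(x, B, n0):
    # power needed to sustain a rate of x bits/s over bandwidth B (assumed form)
    return n0 * (2.0 ** (x / B) - 1.0)

def upload_energy(t_u, L_u, H_u, B, n0):
    # the device with the best channel H_u uploads L_u bits in t_u seconds
    return t_u * g(L_u / t_u, B, n0) / H_u

def download_energy(t_d, L_d, H_d, B, n0):
    # the result (L_d bits) is multicast at the rate the worst channel H_d can decode
    return t_d * g(L_d / t_d, B, n0) / H_d

def execution_energy(L_e, F_b, mu):
    # assumed mu * F_b^2 joules per CPU cycle, for L_e cycles
    return mu * (F_b ** 2) * L_e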

Step A4) specifically comprises:

A401) Initialize the iteration counter t = 1 and the Lagrange multipliers

Figure BDA0001356983290000044

A402) Solve the Lagrangian relaxation problem under λ_t:

First, obtain the optimal caching scheme under λ_t (Figure BDA0001356983290000045) by solving the following knapsack problem:

Figure BDA0001356983290000046

Figure BDA00013569832900000412

Figure BDA0001356983290000047

where e_1,n(X, H, λ) is defined by

e_1,n(X, H, λ) = p(X, H)(E_u,n(f(X, H, L_u,n, H_u,n, λ), X, H) + E_e,n(X)) + λ f(X, H, L_u,n, H_u,n, λ)

where p(X, H) is the probability that system state (X, H) occurs, and f(X, H, L_u,n, H_u,n, λ) is defined by

Figure BDA0001356983290000048

where W(·) is the Lambert W function, K_n(X) is the total number of mobile devices with task n to execute, B is the bandwidth, and n_0 is the variance of the white Gaussian noise;

Next, compute the optimal upload time scheme (Figure BDA0001356983290000049) and the optimal download time scheme (Figure BDA00013569832900000410) under λ_t according to the following formulas:

Figure BDA00013569832900000411

Figure BDA0001356983290000051

A403) Compute the subgradient of g(λ_t):

Figure BDA0001356983290000052

Update λ_{t+1}(X, H) = max{λ_t(X, H) + α_t s(X, H, λ_t), 0}, where α_t = (1 + m)/(t + m) and m is a non-negative constant;

A404) Check whether s(X, H, λ_t) < ε, where ε is a preset threshold. If so, output the current

Figure BDA0001356983290000053

Figure BDA0001356983290000054

as the optimal allocation scheme and terminate; otherwise, set t = t + 1 and return to step A402).
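Steps A401) to A404) describe a projected subgradient ascent on the dual variables. The Python sketch below mirrors that loop at a high level; solve_relaxation stands in for the knapsack and time-allocation computations of step A402), and the threshold, step-size constant, and iteration cap are illustrative placeholders.

def subgradient_dual_ascent(states, solve_relaxation, eps=1e-3, m=1.0, max_iter=1000):
    """states: iterable of system states (X, H).
    solve_relaxation(lam): solves the Lagrangian relaxation under the current
    multipliers lam and returns s, a dict mapping each state to the
    subgradient s(X, H, lam) (time used minus the budget T)."""
    lam = {state: 0.0 for state in states}            # A401) initialise t = 1 and multipliers
    for t in range(1, max_iter + 1):
        s = solve_relaxation(lam)                     # A402) cache scheme + upload/download times
        step = (1.0 + m) / (t + m)                    # diminishing step size alpha_t
        for state in states:                          # A403) projected subgradient update
            lam[state] = max(lam[state] + step * s[state], 0.0)
        if all(v < eps for v in s.values()):          # A404) stop once s(X, H, lam_t) < eps
            break
    return lam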

The low-complexity suboptimal resource allocation of task caching and upload/download time based on the task caching and transmission optimization mechanism is specifically:

Assume that the base station caches none of the results of the tasks to be executed, obtain the optimal upload/download time allocation, and then re-optimize the task cache allocation; after an approximate task caching scheme is obtained, re-optimize the upload/download time allocation according to this approximate caching scheme.

The resource allocation of the low-complexity suboptimal task caching and upload/download time comprises the following steps:

B1) Assume the base station cache capacity C = 0 and, for each system state (X, H), construct the following time allocation problem:

Figure BDA0001356983290000056

Figure BDA00013569832900000511

Figure BDA0001356983290000057

where E_n(0, t_u,n, t_d,n, X, H) is the system energy consumption for task n, t_u,n is the time for uploading task n to the base station, t_d,n is the time for the base station to send the calculation result of task n, X is the random system task state, H is the random system channel state, 𝒩 denotes the set of tasks in the system, and T is the total time budget for uploading and downloading;

B2) Solve the time allocation problem: according to the following two formulas, use the bisection method to obtain the Lagrangian factor (Figure BDA00013569832900000513) that satisfies

Figure BDA0001356983290000058

Figure BDA0001356983290000059

Figure BDA00013569832900000510

where f(X, H, L_u,n, H_u,n, λ) is defined by

Figure BDA0001356983290000061

where p(X, H) is the probability that system state (X, H) occurs, W(·) is the Lambert W function, K_n(X) is the total number of mobile devices with task n to execute, B is the bandwidth, and n_0 is the variance of the white Gaussian noise;

and then determine the optimal upload/download time allocation solutions

Figure BDA0001356983290000062 and Figure BDA0001356983290000063

according to

Figure BDA0001356983290000064
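Step B2) reduces, for each state, to finding the multiplier at which the total upload/download time meets the budget T exactly. Because the closed-form time expressions appear only as images above, the sketch below treats them abstractly through a caller-supplied total_time(lam) function, assumed non-increasing in lam; the bracket growth and tolerance are illustrative.

def bisect_lagrange_factor(total_time, T, lam_lo=0.0, lam_hi=1.0, tol=1e-6):
    """total_time(lam): total upload plus download time implied by the per-task
    time formulas at multiplier lam (assumed non-increasing in lam).
    Returns a multiplier lam* with total_time(lam*) approximately equal to T."""
    for _ in range(64):                    # grow the upper bracket until the budget is met
        if total_time(lam_hi) <= T:
            break
        lam_hi *= 2.0
    while lam_hi - lam_lo > tol:           # standard bisection on the monotone map
        mid = 0.5 * (lam_lo + lam_hi)
        if total_time(mid) > T:
            lam_lo = mid                   # still over budget: multiplier must grow
        else:
            lam_hi = mid                   # within budget: multiplier can shrink
    return 0.5 * (lam_lo + lam_hi)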

B3) Reconsider the base station cache, i.e., let C ≠ 0, and use a greedy algorithm (sketched below) to obtain an approximate task caching scheme, that is, an approximate solution of the following knapsack problem:

Figure BDA0001356983290000065

Figure BDA0001356983290000066

Figure BDA0001356983290000067

Figure BDA0001356983290000068

where c_n denotes the caching status of the calculation result of task n at the base station (c_n = 1 if the result is cached at the base station, c_n = 0 if it is not), L_d,n > 0 is the size of the calculation result of task n, and e_1,n(X, H, λ) is defined by

e_1,n(X, H, λ) = p(X, H)(E_u,n(f(X, H, L_u,n, H_u,n, λ), X, H) + E_e,n(X)) + λ f(X, H, L_u,n, H_u,n, λ);
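A common greedy rule for the 0-1 knapsack in step B3) is to rank tasks by estimated energy saving per byte of cached result and admit them while they still fit in the capacity C. The sketch below follows that density rule; the saving values are placeholders for the per-task benefit implied by e_1,n and E_d,n, which are not reproduced in this text.

def greedy_cache(tasks, saving, size, capacity):
    """tasks: iterable of task ids; saving[n]: estimated average energy saved
    by caching the result of task n; size[n]: result size L_d,n; capacity: C."""
    chosen, used = set(), 0.0
    # density rule: largest saving per unit of cache space first
    for n in sorted(tasks, key=lambda k: saving[k] / size[k], reverse=True):
        if saving[n] > 0 and used + size[n] <= capacity:
            chosen.add(n)
            used += size[n]
    return chosen          # c_n = 1 for n in chosen, c_n = 0 otherwise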

B4) Based on the approximate task caching scheme, use the following formulas to obtain, for each system state (X, H), the optimal upload/download time allocation (Figure BDA00013569832900000611 and Figure BDA00013569832900000612) under the approximate task caching scheme (Figure BDA00013569832900000610):

Figure BDA00013569832900000613

Figure BDA00013569832900000614

Figure BDA00013569832900000615

Compared with the prior art, the present invention jointly optimizes communication, caching, and computing resources in a mobile edge system with multi-task requests so as to save energy, and has the following beneficial effects:

1. The upload and download mechanism saves energy. For any task, when multiple mobile devices need to upload the task to the base station for computation, only one mobile device uploads it, avoiding repeated transmission; the base station selects the mobile device with the best channel (i.e., the largest channel power) for uploading, maximizing the energy saving. When multiple mobile devices download the calculation result of the task, the base station sends the result once by multicast so that all requesting mobile devices receive it successfully, again avoiding repeated transmission and saving energy.

2. When the system's computing capability permits, the optimal task caching and upload/download time allocation scheme achieves optimal performance.

3. When the system's computing capability is limited, the low-complexity suboptimal task caching and upload/download time allocation scheme obtains a good solution at low complexity.

4. In the steps for solving the optimal task caching and upload/download time allocation scheme, the Lagrangian relaxation problem is solved and the Lagrange multipliers are updated iteratively according to the subgradient at its optimal solution, which guarantees the optimality of the scheme.

5. When solving the low-complexity suboptimal task caching and upload/download time allocation scheme, the base station cache capacity is first set to zero; the cache is reconsidered only after the intermediate variables have been obtained, which greatly reduces the complexity of the problem.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the system model of the present invention;

Fig. 2 is a schematic flow chart of the computation of the optimal task caching and upload/download time allocation scheme;

Fig. 3 is a schematic flow chart of the computation of the low-complexity suboptimal task caching and upload/download time allocation scheme.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments. This embodiment is implemented on the premise of the technical solution of the present invention and provides a detailed implementation manner and a specific operation process, but the protection scope of the present invention is not limited to the following embodiments.

The present invention provides a resource allocation method suitable for mobile edge computing scenarios. Based on a task caching and transmission optimization mechanism, the method realizes either the resource allocation of optimal task caching and upload/download time or the resource allocation of low-complexity suboptimal task caching and upload/download time.

The proposed task caching and transmission optimization mechanism is specifically: when the calculation result of the task to be executed by a mobile device has already been cached by the base station, the mobile device downloads the calculation result of that task from the base station; when the calculation result has not been cached by the base station, the mobile device uploads the task to the base station for computation and then downloads the calculation result from the base station. When multiple mobile devices upload the same task to the base station, the base station selects the mobile device with the best channel to perform the upload; when multiple mobile devices download the calculation result of the same task, the base station sends the calculation result once by multicast, at a rate such that the mobile device with the worst channel can just successfully receive it.

In the system model shown in Fig. 1 there are one base station, five mobile devices, and four tasks; the base station stores the calculation results of task 1 and task 2, and there are two channel states in the system: a good channel and a poor channel. Specifically, mobile device 1 and mobile device 2 both have task 1 to execute and have the same channel; since the base station has already stored the calculation result of task 1, it computes the transmission rate from the channels of devices 1 and 2 and directly multicasts the calculation result of task 1 to both of them at that rate. Only mobile device 3 has task 2 to execute, and the base station has stored the calculation result of task 2, so the base station sends the result of task 2 to device 3 by unicast. Mobile device 4 and mobile device 5 both have task 4 to execute; since the base station has not stored the calculation result of task 4, one mobile device must be selected to upload task 4 to the base station. Device 4 has the best channel, so device 4 uploads task 4 and the base station then computes it; device 5 has the worst channel, so the base station computes the transmission rate from device 5's channel and multicasts the result of task 4 to devices 4 and 5 at that rate, ensuring that both can receive it successfully.
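Replaying the Fig. 1 scenario with the selection rule sketched earlier gives exactly the decisions described in this paragraph. The channel powers below are invented numbers, chosen only so that device 4 has the best and device 5 the worst channel among the requesters of task 4.

# Illustrative replay of the Fig. 1 example; all numeric values are invented.
requests = {1: ["dev1", "dev2"], 2: ["dev3"], 4: ["dev4", "dev5"]}
cached = {1, 2}                                   # base station stores the results of tasks 1 and 2
channel = {"dev1": 1.0, "dev2": 1.0, "dev3": 0.8, "dev4": 1.0, "dev5": 0.2}

for task, devs in requests.items():
    uploader = None if task in cached else max(devs, key=channel.get)
    rate_setter = min(devs, key=channel.get)      # worst requester fixes the multicast rate
    print(task, uploader, rate_setter)
# Output: tasks 1 and 2 need no upload; task 4 is uploaded by dev4 (best channel)
# and its result is multicast at the rate dev5 (worst channel) can still decode.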

I. The resource allocation of optimal task caching and upload/download time based on the task caching and transmission optimization mechanism is, specifically, to construct the delay-constrained average total system energy minimization problem and solve it to obtain the optimal allocation scheme. As shown in Fig. 2, it comprises the following steps:

A1) Construct the delay-constrained average total system energy minimization problem:

Figure BDA0001356983290000081

Figure BDA0001356983290000082

Figure BDA0001356983290000083

where E(c, T_u(X,H), T_d(X,H), X, H) is the total system energy consumption; c = (c_n) collects the caching indicators, where c_n denotes the caching status of the calculation result of task n at the base station (c_n = 1 if the result is cached at the base station, c_n = 0 if it is not); 𝒩 denotes the set of tasks in the system; X denotes the random system task state and H the random system channel state; T_u(X,H) and T_d(X,H) denote the vectors of per-task upload times and download times under system state (X,H); L_d,n > 0 is the size of the calculation result of task n; C is the base station cache capacity; t_u,n is the upload time of task n; t_d,n is the download time of task n; and T is the total time budget for uploading and downloading.

The total system energy consumption is defined as:

Figure BDA0001356983290000087

Figure BDA0001356983290000088

where E_u,n(t_u,n, X, H) is the transmission energy consumed by the mobile device to upload task n to the base station with upload time t_u,n when the system state is (X, H), E_e,n(X) is the energy consumed by the base station to execute task n and obtain its calculation result, and E_d,n(t_d,n, X, H) is the transmission energy required by the base station to send the calculation result with download time t_d,n. Specifically:

E_u,n(t_u,n, X, H) is defined as

Figure BDA0001356983290000091

where H_u,n is the maximum channel power between the base station and all mobile devices that have task n to execute, L_u,n > 0 is the size of the input of task n, K_n(X) is the total number of mobile devices with task n to execute, and the function g(·) is defined as

Figure BDA0001356983290000092

where B is the bandwidth and n_0 is the variance of the white Gaussian noise.

E_d,n(t_d,n, X, H) is defined as

Figure BDA0001356983290000093

where H_d,n is the minimum channel power between the base station and all mobile devices that have task n to execute, K_n(X) is the total number of mobile devices with task n to execute, and the function g(·) is defined as

Figure BDA0001356983290000094

where B is the bandwidth and n_0 is the variance of the white Gaussian noise.

E_e,n(X) is defined as

Figure BDA0001356983290000095

where F_b is the base station CPU frequency, μ is a constant factor determined by the switched capacitance of the server, L_e,n is the workload of task n, i.e., the number of CPU cycles required to execute task n, and K_n(X) is the total number of mobile devices with task n to execute.

A2) Construct the Lagrangian relaxation problem:

Figure BDA0001356983290000096

Figure BDA0001356983290000097

Figure BDA0001356983290000098

where L(c, T_u, T_d, λ) is the Lagrangian of the delay-constrained average total system energy minimization problem and λ is the Lagrangian factor.

A3) Construct the dual problem of the delay-constrained average total system energy minimization problem:

Figure BDA0001356983290000101

A4) Iteratively solve the dual problem to obtain the optimal allocation scheme: initialize the iteration counter t = 1 and the Lagrange multipliers

Figure BDA0001356983290000102

A5) Solve the Lagrangian relaxation problem under λ_t:

First, obtain the optimal caching scheme under λ_t (Figure BDA0001356983290000103) by solving the following knapsack problem:

Figure BDA0001356983290000104

Figure BDA0001356983290000105

Figure BDA0001356983290000106

where e_1,n(X, H, λ) is defined by

e_1,n(X, H, λ) = p(X, H)(E_u,n(f(X, H, L_u,n, H_u,n, λ), X, H) + E_e,n(X)) + λ f(X, H, L_u,n, H_u,n, λ)

where p(X, H) is the probability that system state (X, H) occurs, and f(X, H, L_u,n, H_u,n, λ) is defined by

Figure BDA0001356983290000107

where W(·) is the Lambert W function, K_n(X) is the total number of mobile devices with task n to execute, B is the bandwidth, and n_0 is the variance of the white Gaussian noise;

Next, compute the optimal upload time scheme (Figure BDA0001356983290000108) and the optimal download time scheme (Figure BDA0001356983290000109) under λ_t according to the following formulas:

Figure BDA00013569832900001010

Figure BDA00013569832900001011

A6) Compute the subgradient of g(λ_t):

Figure BDA00013569832900001012

Update λ_{t+1}(X, H) = max{λ_t(X, H) + α_t s(X, H, λ_t), 0}, where α_t = (1 + m)/(t + m) and m is a non-negative constant.

A7) Check whether s(X, H, λ_t) < ε, where ε is a preset threshold. If so, output the current

Figure BDA00013569832900001013, Figure BDA00013569832900001014, and Figure BDA00013569832900001015

as the optimal allocation scheme and terminate; otherwise, set t = t + 1 and return to step A5).

II. The resource allocation of low-complexity suboptimal task caching and upload/download time based on the task caching and transmission optimization mechanism is, specifically, to assume that the base station caches none of the results of the tasks to be executed, obtain the optimal upload/download time allocation, and then re-optimize the task cache allocation; after an approximate task caching scheme is obtained, the upload/download time allocation is re-optimized according to that approximate caching scheme. As shown in Fig. 3, it comprises the following steps:

B1) Assume the base station cache capacity C = 0 and, for each system state (X, H), construct the following time allocation problem:

Figure BDA0001356983290000112

Figure BDA0001356983290000113

Figure BDA0001356983290000114

B2) Solve the time allocation problem: according to the following two formulas, use the bisection method to obtain the Lagrangian factor (Figure BDA0001356983290000116) that satisfies

Figure BDA0001356983290000115

Figure BDA0001356983290000117

Figure BDA0001356983290000118

and then determine the optimal upload/download time allocation solutions

Figure BDA0001356983290000119 and Figure BDA00013569832900001110

according to

Figure BDA00013569832900001111

B3) Reconsider the base station cache, i.e., let C ≠ 0, and use a greedy algorithm to obtain an approximate task caching scheme, that is, an approximate solution of the following knapsack problem:

Figure BDA00013569832900001112

Figure BDA00013569832900001113

B4) Based on the approximate task caching scheme, use the following formulas to obtain, for each system state (X, H), the optimal upload/download time allocation (Figure BDA00013569832900001116 and Figure BDA00013569832900001117) under the approximate task caching scheme (Figure BDA00013569832900001115):

Figure BDA00013569832900001118

Figure BDA00013569832900001119

Figure BDA00013569832900001120

The preferred embodiments of the present invention have been described in detail above. It should be understood that those skilled in the art can make many modifications and changes according to the concept of the present invention without creative effort. Therefore, any technical solution that can be obtained by those skilled in the art through logical analysis, reasoning, or limited experiments on the basis of the prior art according to the concept of the present invention shall fall within the protection scope determined by the claims.

Claims (6)

1. A resource allocation method suitable for a mobile edge computing scene, characterized in that the method realizes resource allocation of optimal task caching and upload/download time, or of low-complexity suboptimal task caching and upload/download time, based on a task caching and transmission optimization mechanism;
the task caching and transmission optimization mechanism comprises the following steps:
when the calculation result of the task to be executed by the mobile equipment is cached by the base station, the mobile equipment downloads the calculation result of the task from the base station; when the calculation result of the task to be executed by the mobile equipment is not cached by the base station, the mobile equipment uploads the task to the base station for calculation and then downloads the calculation result of the task from the base station; wherein, when a plurality of mobile equipment upload the same task to the base station, the base station selects the mobile equipment with the best channel to perform the uploading, and when a plurality of mobile equipment download the calculation result of the same task, the base station sends the calculation result of the task once in a multicast mode and enables the mobile equipment with the worst channel to just successfully receive the calculation result;
the resource allocation realizing the optimal task caching and upload/download time based on the task caching and transmission optimization mechanism specifically comprises: constructing a delay-constrained average total system energy consumption minimization problem and solving it to obtain the optimal allocation scheme;
the resource allocation of the optimal task caching and upload/download time comprises the following steps:
A1) construct the delay-constrained average total system energy consumption minimization problem:
Figure FDA0002713251450000011
wherein E(c, T_u(X,H), T_d(X,H), X, H) is the total system energy consumption; c = (c_n) collects the caching indicators, wherein c_n denotes the caching status of the calculation result of task n at the base station, c_n = 1 indicating that the calculation result of task n is cached at the base station and c_n = 0 indicating that it is not cached by the base station; 𝒩 denotes the set of tasks in the system; X denotes the random system task state and H the random system channel state; T_u(X,H) and T_d(X,H) denote the vectors of per-task upload times and download times under system state (X,H); L_d,n > 0 is the size of the calculation result of task n; C is the base station cache capacity; t_u,n is the upload time of task n; t_d,n is the download time of task n; and T is the total time limit for uploading and downloading;
A2) construct the Lagrangian relaxation problem:
Figure FDA0002713251450000016
Figure FDA0002713251450000021
wherein L(c, T_u, T_d, λ) is the Lagrangian function of the delay-constrained average total system energy consumption minimization problem, and λ is the Lagrangian factor;
A3) construct the dual problem of the delay-constrained average total system energy consumption minimization problem:
Figure FDA0002713251450000022
Figure FDA00027132514500000210
A4) iteratively solve the dual problem to obtain the optimal allocation scheme;
the resource allocation realizing the low-complexity suboptimal task caching and upload/download time based on the task caching and transmission optimization mechanism specifically comprises:
assuming that the base station caches none of the results of the tasks to be executed, obtaining the optimal solution of the upload/download time allocation and then re-optimizing the task cache allocation; after an approximate task caching scheme is obtained, re-optimizing the upload/download time allocation scheme according to the approximate task caching scheme;
the resource allocation of the low-complexity suboptimal task caching and upload/download time comprises the following steps:
B1) assuming the base station cache capacity C = 0, construct, in each system state (X, H), the following time allocation problem:
Figure FDA0002713251450000024
wherein E_n(0, t_u,n, t_d,n, X, H) represents the system energy consumption for task n, t_u,n is the time for uploading task n to the base station, t_d,n is the time for the base station to send the calculation result of task n, X represents the random system task state, H represents the random system channel state, 𝒩 represents the set of tasks in the system, and T is the total time limit for uploading and downloading;
B2) solve the time allocation problem: according to the following two formulas, use the bisection method to obtain the Lagrangian factor (Figure FDA0002713251450000026) that satisfies
Figure FDA0002713251450000025
Figure FDA0002713251450000027
Figure FDA0002713251450000028
wherein f(X, H, L_u,n, H_u,n, λ) is defined by the following formula:
Figure FDA0002713251450000031
where p(X, H) represents the probability of occurrence of the system state (X, H), W(·) is the Lambert W function, K_n(X) is the total number of mobile devices with task n to be performed, B is the bandwidth, and n_0 is the variance of the white Gaussian noise;
and further determine the optimal solution of the upload/download time allocation
Figure FDA0002713251450000032 and Figure FDA0002713251450000033
according to
Figure FDA0002713251450000034
B3) reconsider the base station cache, i.e., let C ≠ 0, and use a greedy algorithm to obtain an approximate task caching scheme, i.e., an approximate solution of the following knapsack problem:
Figure FDA0002713251450000035
Figure FDA0002713251450000036
wherein c_n denotes the caching status of the calculation result of task n at the base station, c_n = 1 indicating that the calculation result of task n is cached at the base station and c_n = 0 indicating that it is not cached by the base station, L_d,n > 0 is the size of the calculation result of task n, and e_1,n(X, H, λ) is defined by the formula:
e_1,n(X, H, λ) = p(X, H)(E_u,n(f(X, H, L_u,n, H_u,n, λ), X, H) + E_e,n(X)) + λ f(X, H, L_u,n, H_u,n, λ);
B4) based on the approximate task caching scheme, use the following formulas to obtain, for each system state (X, H), the optimal upload/download time allocation scheme (Figure FDA0002713251450000039 and Figure FDA00027132514500000310) under the approximate task caching scheme (Figure FDA0002713251450000038):
Figure FDA00027132514500000311
Figure FDA00027132514500000312
Figure FDA00027132514500000313
2. The resource allocation method for a mobile edge computing scenario as claimed in claim 1, wherein in step A1) the total energy consumption of the system is defined as:
Figure FDA00027132514500000314
Figure FDA00027132514500000315
wherein E_u,n(t_u,n, X, H) denotes the transmission energy consumed by the mobile device to upload task n to the base station with upload time t_u,n when the system state is (X, H), E_e,n(X) denotes the energy consumption of executing task n at the base station and obtaining its calculation result, and E_d,n(t_d,n, X, H) denotes the transmission energy required by the base station to send the calculation result with download time t_d,n.
3. The resource allocation method for a mobile edge computing scenario as claimed in claim 2, wherein E_u,n(t_u,n, X, H) is defined as
Figure FDA0002713251450000041
wherein H_u,n is the maximum value of the channel power between the base station and all mobile devices having task n to be performed, L_u,n > 0 is the size of the input of task n, K_n(X) is the total number of mobile devices with task n to perform, and the function g(·) is defined as:
Figure FDA0002713251450000042
where B is the bandwidth and n_0 is the variance of the white Gaussian noise.
4. The resource allocation method for a mobile edge computing scenario as claimed in claim 2, wherein E_d,n(t_d,n, X, H) is defined as
Figure FDA0002713251450000043
wherein H_d,n is the minimum value of the channel power between the base station and all mobile devices having task n to be performed, K_n(X) is the total number of mobile devices with task n to perform, and the function g(·) is defined as:
Figure FDA0002713251450000044
where B is the bandwidth and n_0 is the variance of the white Gaussian noise.
5. The resource allocation method for a mobile edge computing scenario as claimed in claim 2, wherein E_e,n(X) is defined as
Figure FDA0002713251450000045
wherein F_b is the base station frequency, μ is a constant factor determined by the switched capacitance of the server, L_e,n is the workload of task n, i.e., the number of CPU cycles required to execute task n, and K_n(X) is the total number of mobile devices having task n to perform.
6. The resource allocation method applicable to the mobile edge computing scenario as claimed in claim 2, wherein step A4) is specifically:
A401) initialize the number of iterations t = 1 and the Lagrange multipliers
Figure FDA0002713251450000046
A402) solve the Lagrangian relaxation problem under λ_t:
first, by solving the following knapsack problem, obtain the optimal caching scheme under λ_t
Figure FDA0002713251450000047
Figure FDA0002713251450000048
Figure FDA0002713251450000049
Figure FDA0002713251450000051
wherein e_1,n(X, H, λ) is defined by the formula:
e_1,n(X, H, λ) = p(X, H)(E_u,n(f(X, H, L_u,n, H_u,n, λ), X, H) + E_e,n(X)) + λ f(X, H, L_u,n, H_u,n, λ)
wherein p(X, H) represents the probability of occurrence of the system state (X, H), and f(X, H, L_u,n, H_u,n, λ) is defined by the following formula:
Figure FDA0002713251450000052
wherein W(·) is the Lambert W function, K_n(X) is the total number of mobile devices with task n to be performed, B is the bandwidth, and n_0 is the variance of the white Gaussian noise;
next, calculate the optimal upload time scheme
Figure FDA0002713251450000053
and the optimal download time scheme
Figure FDA0002713251450000054
under λ_t according to the following formulas:
Figure FDA0002713251450000055
Figure FDA0002713251450000056
A403) calculate the subgradient of g(λ_t):
Figure FDA0002713251450000057
update λ_{t+1}(X, H) = max{λ_t(X, H) + α_t s(X, H, λ_t), 0}, where α_t = (1 + m)/(t + m) and m is a non-negative constant;
A404) judge whether s(X, H, λ_t) < ε is satisfied, where ε is a preset threshold; if so, output the current
Figure FDA0002713251450000058
Figure FDA0002713251450000059
and
Figure FDA00027132514500000510
as the optimal allocation scheme and terminate; if not, update the number of iterations t = t + 1 and proceed to step A402).
CN201710600370.2A 2017-07-21 2017-07-21 Resource allocation method suitable for mobile edge computing scene Active CN107708135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710600370.2A CN107708135B (en) 2017-07-21 2017-07-21 Resource allocation method suitable for mobile edge computing scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710600370.2A CN107708135B (en) 2017-07-21 2017-07-21 Resource allocation method suitable for mobile edge computing scene

Publications (2)

Publication Number Publication Date
CN107708135A CN107708135A (en) 2018-02-16
CN107708135B true CN107708135B (en) 2021-01-22

Family

ID=61170090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710600370.2A Active CN107708135B (en) 2017-07-21 2017-07-21 Resource allocation method suitable for mobile edge computing scene

Country Status (1)

Country Link
CN (1) CN107708135B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108738046B (en) * 2018-04-17 2021-04-06 浙江工业大学 A rate maximization method for mobile edge computing based on semi-supervised learning
CN108834080B (en) * 2018-04-17 2021-03-19 东南大学 Distributed caching and user association method based on multicast technology in heterogeneous networks
CN108632862B (en) * 2018-04-17 2021-06-18 浙江工业大学 Mobile edge computing offloading decision method based on deep deterministic policy gradient
CN108738045B (en) * 2018-04-17 2021-04-06 浙江工业大学 Moving edge calculation rate maximization method based on depth certainty strategy gradient
CN108632860B (en) * 2018-04-17 2021-06-18 浙江工业大学 A mobile edge computing rate maximization method based on deep reinforcement learning
CN108632861B (en) * 2018-04-17 2021-06-18 浙江工业大学 A mobile edge computing offload decision-making method based on deep reinforcement learning
CN108833468B (en) * 2018-04-27 2021-05-11 广州西麦科技股份有限公司 Video processing method, device, equipment and medium based on mobile edge calculation
CN108712755B (en) * 2018-05-18 2021-02-26 浙江工业大学 Non-orthogonal access uplink transmission time optimization method based on deep reinforcement learning
CN108848514B (en) * 2018-06-20 2021-08-03 中国联合网络通信集团有限公司 Data communication optimization method and data communication optimizer
FR3086493B1 (en) * 2018-09-20 2020-08-28 Renault Sas PROCESS FOR REASSIGNING A PERIPHERAL DATA PROCESSING SERVER
CN109089272B (en) * 2018-09-21 2021-10-15 浙江工业大学 Brent-style latency optimization method for mobile edge computing in multi-base station scenarios
CN109067490B (en) * 2018-09-29 2020-10-30 郑州航空工业管理学院 Resource allocation method for multi-UAV cooperative mobile edge computing system under cellular network connection
CN109413724B (en) * 2018-10-11 2021-09-03 重庆邮电大学 MEC-based task unloading and resource allocation scheme
CN109462879B (en) * 2018-12-17 2020-07-31 中国科学院计算技术研究所 An admission control method and system
CN109743099A (en) * 2019-01-10 2019-05-10 深圳市简智联信息科技有限公司 Mobile edge calculations system and its resource allocation methods
CN109885397B (en) * 2019-01-15 2023-04-07 长安大学 Delay optimization load task migration algorithm in edge computing environment
CN109814951B (en) * 2019-01-22 2021-09-28 南京邮电大学 Joint optimization method for task unloading and resource allocation in mobile edge computing network
CN110234127B (en) * 2019-06-11 2022-04-01 重庆邮电大学 SDN-based fog network task unloading method
CN110351760B (en) * 2019-07-19 2022-06-03 重庆邮电大学 Dynamic task unloading and resource allocation method for mobile edge computing system
CN110248206B (en) * 2019-07-29 2020-08-28 北京邮电大学 A resource allocation method, device and electronic device for edge network system
CN110602103B (en) * 2019-09-17 2021-09-10 中国联合网络通信集团有限公司 Electronic lock protocol conversion optimization method and electronic lock protocol conversion optimizer
CN110958675B (en) * 2019-10-29 2021-08-17 南京邮电大学 Terminal access method based on 5G fog computing node
CN110913429B (en) * 2019-11-07 2023-04-11 北京长焜科技有限公司 Wireless edge shunting transparent access method
CN111132191B (en) * 2019-12-12 2022-04-01 重庆邮电大学 Method for unloading, caching and resource allocation of joint tasks of mobile edge computing server
CN111263303B (en) * 2020-01-15 2021-02-02 北京交通大学 A method for self-organization and cooperation of fog nodes based on mobile IP
CN111475301B (en) * 2020-04-09 2021-06-11 清华大学 Satellite resource allocation method and device and electronic equipment
CN114867039B (en) * 2022-04-15 2024-06-21 河海大学 An edge computing offloading method for intermediate sea area scenarios

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789834A (en) * 2010-01-29 2010-07-28 中国人民解放军理工大学 Cooperation spectrum sensing method based on network encoding
CN106231607A (en) * 2016-09-21 2016-12-14 北京佰才邦技术有限公司 The method of a kind of resource distribution and base station

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7996350B2 (en) * 2008-03-05 2011-08-09 The Boeing Company Virtual intelligent fabric

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101789834A (en) * 2010-01-29 2010-07-28 中国人民解放军理工大学 Cooperation spectrum sensing method based on network encoding
CN106231607A (en) * 2016-09-21 2016-12-14 北京佰才邦技术有限公司 The method of a kind of resource distribution and base station

Also Published As

Publication number Publication date
CN107708135A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN107708135B (en) Resource allocation method suitable for mobile edge computing scene
Hu Mobility-aware edge caching and computing in vehicle networks: A deep reinforcement learning
CN110098969B (en) Fog computing task unloading method for Internet of things
CN107333267B (en) A kind of edge calculations method for 5G super-intensive networking scene
Cui et al. Energy-efficient resource allocation for cache-assisted mobile edge computing
CN113950103A (en) Multi-server complete computing unloading method and system under mobile edge environment
CN111930436A (en) Random task queuing and unloading optimization method based on edge calculation
CN110012039B (en) ADMM-based task allocation and power control method in Internet of vehicles
CN110418416A (en) Resource allocation method based on multi-agent reinforcement learning in mobile edge computing system
CN113434212A (en) Cache auxiliary task cooperative unloading and resource allocation method based on meta reinforcement learning
CN107995660A (en) Joint Task Scheduling and Resource Allocation Method Supporting D2D-Edge Server Offloading
Zhong et al. Cooperative service caching and computation offloading in multi-access edge computing
CN107682443A (en) An Efficient Offloading Method for Computational Tasks in Mobile Edge Computing Systems Considering Latency and Energy Consumption Jointly
CN108521436B (en) Mobile virtual reality transmission method and system based on terminal computing storage
CN113064665B (en) Multi-server computing unloading method based on Lyapunov optimization
CN111565380B (en) Hybrid offloading method based on NOMA-MEC in the Internet of Vehicles
CN110856259A (en) Resource allocation and offloading method for adaptive data block size in mobile edge computing environment
CN112491957B (en) Distributed computing offloading method and system in edge network environment
Zheng et al. MEC-enabled wireless VR video service: A learning-based mixed strategy for energy-latency tradeoff
CN110489176A (en) A Multi-access Edge Computing Task Offloading Method Based on Bin Packing Problem
Huang et al. Federated learning based QoS-aware caching decisions in fog-enabled internet of things networks
Zu et al. SMETO: Stable matching for energy-minimized task offloading in cloud-fog networks
CN111556576B (en) Time delay optimization method based on D2D _ MEC system
Dai et al. Proactive caching over cloud radio access network with user mobility and video segment popularity awared
CN114564248A (en) A method for computing offload based on user movement patterns in mobile edge computing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant