
WO2024174426A1 - Task offloading and resource allocation method based on mobile edge computing - Google Patents

Task offloading and resource allocation method based on mobile edge computing

Info

Publication number
WO2024174426A1
WO2024174426A1 PCT/CN2023/100968 CN2023100968W
Authority
WO
WIPO (PCT)
Prior art keywords
task
offloading
resource allocation
base station
processing
Prior art date
Application number
PCT/CN2023/100968
Other languages
English (en)
Chinese (zh)
Inventor
李云
高倩
姚枝秀
夏士超
梁吉申
Original Assignee
重庆邮电大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 重庆邮电大学 filed Critical 重庆邮电大学
Publication of WO2024174426A1 publication Critical patent/WO2024174426A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/08 Load balancing or load distribution
    • H04W 28/09 Management thereof
    • H04W 28/0925 Management thereof using policies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/08 Load balancing or load distribution
    • H04W 28/09 Management thereof
    • H04W 28/0958 Management thereof based on metrics or performance parameters
    • H04W 28/0967 Quality of Service [QoS] parameters
    • H04W 28/0975 Quality of Service [QoS] parameters for reducing delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/10 Flow control between communication endpoints
    • H04W 28/14 Flow control between communication endpoints using intermediate storage
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • the present invention belongs to the technical field of wireless communications, and in particular relates to a task offloading and resource allocation method based on mobile edge computing.
  • MDs: mobile devices
  • VR: virtual reality
  • AR: augmented reality
  • telemedicine, etc.
  • MEC: Mobile Edge Computing
  • MEC pushes the computing power, storage, and other resources of the cloud center down to the network edge, encouraging users to offload computing tasks to the edge of the network and enjoy a high-performance computing service experience.
  • Deep reinforcement learning combines the perception ability of deep learning and the decision-making ability of reinforcement learning, and can effectively handle various decision-making problems in MEC systems.
  • a deep-reinforcement-learning-based resource management method for vehicular multi-access edge computing studies the joint allocation of spectrum, computing and storage resources in MEC vehicular networks, and uses DDPG and hierarchical learning to achieve rapid resource allocation that meets the quality-of-service requirements of vehicular applications.
  • a dynamic computing offloading and resource allocation method based on deep reinforcement learning in a cache-assisted mobile edge computing system studies the dynamic caching, computing offloading and resource allocation problems in cache-assisted MEC systems, and proposes an intelligent dynamic scheduling strategy based on DRL.
  • the above methods all use single-agent deep reinforcement learning algorithms, which require a stable environment; the actual network environment, however, changes dynamically. This non-stationarity hinders convergence and also prevents techniques such as experience replay from being used directly.
  • the present invention proposes a task offloading and resource allocation method based on mobile edge computing, which includes:
  • the computationally intensive task generated at time slot t (t ∈ T) is defined by a tuple whose components are the data size of the task, the maximum tolerable delay of the task, the number of CPU cycles required to process a unit-bit task, and the service type required for processing the task; the tasks generated by all users under base station BS m form a task set.
  • constructing the service assignment model in step S2 specifically includes: for any user there are four task processing modes, each with a different processing delay. The four modes are: local computation; offloading to the associated BS m for processing; forwarding the offloaded task through the associated base station to another BS for processing; and offloading to the cloud center for processing.
  • the local-computing delay denotes the task processing delay when the user processes the task on its own device.
  • T tr,m (t) represents the delay of the task being forwarded by the associated base station, and the companion term represents the delay of the other base station processing the task.
  • T m,c (t) represents the transmission delay of tasks forwarded to the cloud center through the associated base station.
  • the task offloading and resource allocation joint optimization problem is expressed as:
  • T represents the system operation time
  • M represents the number of base stations
  • a(t) represents the base station service cache strategy
  • b(t) represents the task offloading strategy
  • ⁇ (t) represents the spectrum resource allocation strategy
  • ⁇ (t) represents the base station computing resource allocation strategy
  • Nm represents the number of user devices under the mth base station
  • the process of using the DSRA algorithm to solve the joint optimization problem of task offloading and resource allocation includes: abstracting the joint optimization problem of task offloading and resource allocation into a partially observable Markov decision process, with the base station acting as an intelligent agent, and constructing the corresponding observation space, action space and reward function; each intelligent agent has an actor network and a critic network embedded in an LSTM network; the actor network generates corresponding actions according to the current local observation state of a single intelligent agent and updates the reward function according to the action, and enters the next state; the critic network estimates the strategies of other intelligent agents based on the global observation state and action; generates experience information based on the current state, next state, action and reward value; samples multiple pieces of experience information to train the actor network and the critic network, updates the network parameters, and obtains the trained actor network and the critic network; and obtains the task offloading and resource allocation strategy based on the actor network training results.
  • r m (t) represents the reward value of base station BS m at time slot t
  • T represents the system running time
  • M represents the number of base stations
  • N m represents the number of user equipment under the mth base station
  • Y m (t) represents the reward when the task processing delay satisfies the delay constraint
  • U m (t) represents the reward when the cache does not exceed the storage capacity limit of the edge server.
  • the present invention addresses the service orchestration and computing network resource allocation problems in a decentralized MEC scenario, and proposes a task offloading and resource allocation method based on mobile edge computing with the goal of minimizing task processing delay. Considering the time dependency of user service requests and the coupling between service requests and the service cache, an LSTM network is introduced to extract historical state information about service requests, so that better decisions can be made by learning from this history. Simulation experiments show that the method achieves lower latency and a higher cache hit rate, and realizes on-demand resource allocation.
  • FIG1 is a flow chart of a method for task offloading and resource allocation based on mobile edge computing in the present invention
  • FIG2 is a schematic diagram of a mobile edge computing system model in the present invention.
  • FIG3 is a block diagram of the DSRA algorithm in the present invention.
  • FIG4 is a diagram showing the variation of the average delay of the DSRA algorithm and the comparison algorithm in the present invention with the number of training iterations;
  • FIG5 is a diagram showing how the average cache hit rate of the DSRA algorithm of the present invention and the comparison algorithm changes with the number of training iterations.
  • the present invention proposes a task offloading and resource allocation method based on mobile edge computing, as shown in FIG1 , the method includes the following contents:
  • the present invention considers a typical MEC system comprising M base stations (BS), with a corresponding base-station set defined. Each BS is equipped with an MEC server that has certain computing and storage resources. There are N m user devices (MDs) under the m-th base station, and a corresponding user set is defined. The system operates in discrete time slots over a defined time set.
  • for the i-th user under BS m, the computationally intensive task generated at time slot t is defined by its data size in bits, its maximum tolerable delay, the number of CPU cycles required to process a unit-bit task, and the service type required to process the task.
  • the tasks generated by all users under BS m form a task set.
  • S2 Construct service cache model and service assignment model based on the mobile edge computing system model.
  • Building a service cache model specifically includes:
  • a service refers to a specific program or data required to run a given type of task (such as games or virtual/augmented reality).
  • only an MEC server that has cached the corresponding service can provide computing services for an MD's offloaded tasks.
  • Building a service assignment model specifically includes:
  • if BS m caches the type of service required to process the task, the task can be processed by BS m; otherwise, the task can only be processed locally on the device or offloaded to another server.
  • at time slot t, the task offloading strategy of a user is a set of binary indicators: one indicating local processing (the task is processed on the device itself), one indicating offloading to the associated base station, one indicating offloading to a neighboring base station, and one indicating offloading to the cloud center.
  • the task offloading strategies of all users under base station BS m at time slot t form the offloading vector for that slot.
  • the local processing time of the task can be expressed as the number of CPU cycles the task requires (its data size in bits multiplied by the number of CPU cycles needed per unit-bit) divided by the CPU frequency of the device.
  • the uplink transmission rate to BS m follows the Shannon capacity of the bandwidth allocated to the user, where B m is the bandwidth of BS m and the spectrum resource allocation coefficient assigned in time slot t satisfies a normalization constraint (the coefficients of the users under BS m sum to at most one); the spectrum resource allocation strategy of BS m collects these coefficients.
  • the rate depends on the channel gain between the user and BS m and on the additive white Gaussian noise power σ²(t) in time slot t.
  • the transmission delay of the task is its data size divided by the uplink transmission rate.
  • f m represents the CPU frequency of BS m; the CPU frequency allocation coefficient assigned in time slot t satisfies a normalization constraint, and the CPU frequency that BS m allocates to a user can be expressed as the product of this coefficient and f m.
  • the processing result of the task is usually much smaller than the uploaded data, and the present invention ignores the delay of returning the result.
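The delay model above combines a cycles-over-frequency local term with a Shannon-rate uplink term. A minimal Python sketch of these relationships (function and parameter names are illustrative assumptions, not the patent's notation):

```python
import math

def local_delay(d_bits: float, c_cycles_per_bit: float, f_local_hz: float) -> float:
    """Local processing delay: required CPU cycles divided by device CPU frequency."""
    return d_bits * c_cycles_per_bit / f_local_hz

def uplink_rate(bandwidth_hz: float, alloc_coeff: float, tx_power_w: float,
                channel_gain: float, noise_power_w: float) -> float:
    """Shannon rate over the fraction of base-station bandwidth allocated to the user."""
    b = alloc_coeff * bandwidth_hz
    return b * math.log2(1.0 + tx_power_w * channel_gain / noise_power_w)

def offload_transmission_delay(d_bits: float, rate_bps: float) -> float:
    """Time to upload the task data to the associated base station."""
    return d_bits / rate_bps
```

For a 1 Mbit task needing 500 cycles/bit on a 1 GHz device, `local_delay` gives 0.5 s; with half of a 1 MHz band and an SNR of 3, the uplink carries 1 Mbit/s.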
  • if the associated base station BS m does not cache service k, but a nearby base station BS n (n ∈ {1, 2, ..., M} and n ≠ m) caches service k, then the task can be forwarded by the associated base station BS m and migrated to the nearby base station BS n for processing.
  • at time slot t, the transmission rate at which tasks are forwarded from the associated base station to a nearby base station follows the Shannon formula, where the forwarding bandwidth of base station m, the forwarding power P m of base station m, and the channel gain G m,n between base stations m and n determine the rate; the time for the task to be forwarded by the associated base station is then the task size divided by this rate.
  • the task can also be forwarded by the associated base station BS m to the cloud center for processing, that is, The cloud center has abundant computing resources and storage resources, and the present invention ignores the task processing time and result transmission time of the cloud center.
  • for computational offloading to the cloud, the task is forwarded to the cloud center through the associated base station BS m.
  • r m,c (t) is the transmission rate at which BS m forwards tasks to the cloud center.
  • the delay of offloading tasks to the cloud center for processing is
  • the task processing delay of the user under base station BS m at time slot t is determined by the selected processing mode.
  • the transmission delay of offloading the task to the associated base station and the delay of the associated base station processing the task together give the edge-processing delay.
  • T tr,m (t) represents the delay of the task being forwarded by the associated base station.
  • T m,c (t) represents the transmission delay of tasks forwarded to the cloud center through the associated base station.
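The offloading decision selects exactly one of the four processing modes, and the task processing delay is the delay of the chosen mode. A small sketch, with hypothetical names for the four delay terms:

```python
def task_processing_delay(t_local: float, t_assoc: float,
                          t_forward: float, t_cloud: float, mode: str) -> float:
    """Delay of the selected processing mode (a one-hot offloading decision)."""
    delays = {"local": t_local, "associated": t_assoc,
              "forward": t_forward, "cloud": t_cloud}
    return delays[mode]

def best_mode(t_local: float, t_assoc: float,
              t_forward: float, t_cloud: float) -> str:
    """Delay-minimizing mode among the four options."""
    delays = {"local": t_local, "associated": t_assoc,
              "forward": t_forward, "cloud": t_cloud}
    return min(delays, key=delays.get)
```

For example, with delays of 0.5 s (local), 0.2 s (associated BS), 0.4 s (forwarded) and 0.3 s (cloud), the delay-minimizing choice is the associated base station.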
  • S3 Based on the service cache model and service assignment model, establish task offloading and resource allocation constraints.
  • the storage space of the MEC server is limited, and the storage space occupied by the cached services cannot exceed the storage capacity of the MEC server.
  • denote the size of the storage space of the m-th MEC server MEC m as R m; the total storage space occupied by the cached services, where l k represents the storage space occupied by the service required to process a type-k task, must not exceed R m.
  • the processing delay of the task cannot exceed its maximum tolerable delay.
  • the total amount of allocated spectrum resources must not exceed the base station bandwidth.
  • the total amount of allocated computing resources must not exceed the base station's computing resources.
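Step S3's feasibility conditions can be checked together as below (a sketch; the argument names and the normalization of allocation coefficients to sums of at most one are assumptions):

```python
def constraints_satisfied(cache_sizes, capacity, spectrum_coeffs, cpu_coeffs,
                          delays, max_delays) -> bool:
    """Check the cache-capacity, spectrum, computing and delay constraints."""
    if sum(cache_sizes) > capacity:        # cached services fit in MEC storage R_m
        return False
    if sum(spectrum_coeffs) > 1.0:         # allocated spectrum <= BS bandwidth
        return False
    if sum(cpu_coeffs) > 1.0:              # allocated cycles <= BS CPU resources
        return False
    # each task's processing delay within its maximum tolerable delay
    return all(d <= dmax for d, dmax in zip(delays, max_delays))
```

Any violated condition makes the candidate caching/offloading/allocation decision infeasible, which is why the reward function later penalizes these cases.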
  • the server's resources such as computing, spectrum and storage space
  • task offloading and resource allocation are coupled with each other.
  • the present invention aims to minimize the long-term processing delay of tasks.
  • the joint optimization problem of service cache and computing network resource allocation is established and expressed as:
  • T represents the system operation time
  • M represents the number of base stations
  • the base station computing resource allocation strategy at time slot t collects the allocation vectors of all M base stations
  • N m represents the number of user devices under the m-th base station
  • the maximum tolerable delay constraint applies to the task of each user under base station BS m at each time slot t.
  • the local task processing strategy indicates that the user processes the task locally; the remaining indicators denote, respectively, the user's strategies of offloading the task to the associated base station, to other base stations, and to the cloud center.
  • the present invention designs a distributed intelligent service arrangement and computing network resource allocation algorithm (Distributed Service Arrangement and Resource Allocation Algorithm, DSRA) based on multi-agent deep reinforcement learning, in which the base station is used as an agent to learn task offloading strategies, service caching strategies, and computing network resource allocation strategies.
  • the LSTM network is used to extract historical status information about service requests. By learning these historical information, the agent can better understand the future environmental status and make better decisions. As shown in Figure 3, it specifically includes the following contents:
  • the joint optimization problem of task offloading and resource allocation is abstracted into a partially observable Markov decision process (POMDP), with the base station acting as the intelligent agent, and the corresponding observation space, action space and reward function are constructed. A tuple comprising the global state space, the set of agent observation spaces, the global action space set and the reward set describes the above Markov game, where at time slot t the environment is in a global state.
  • agent m makes a local observation, follows its strategy π m to select the corresponding action, and obtains the corresponding reward.
  • the agent can receive detailed task information from mobile devices within its coverage, including the data size of the task, the maximum tolerable delay, the number of CPU cycles required to process the task per bit, and the required service type.
  • the environment state observed by agent m is defined based on the task information it receives within its coverage.
  • Agent m selects the corresponding action from the action space according to the observed environment state o m (t) and the current strategy ⁇ m .
  • the action of agent m is defined over its task offloading, service caching and resource allocation decisions.
  • the reward function measures the effect of an action taken by an agent in a given state.
  • the agent takes an action in the t-1 time slot, and the corresponding reward will be returned to the agent in the t time slot.
  • the agent updates its strategy to obtain the optimal result. Since the reward drives each agent toward its optimal strategy, and the strategy directly determines the computing network resource allocation strategy, computation offloading strategy and service caching strategy of the corresponding MEC server, the reward function should be designed according to the original optimization problem.
  • the reward function constructed by the present invention includes three parts: the first part is the reward for the task processing time; the second part is the reward for the task processing delay satisfying the delay constraint; and the third part is the reward for the cache not exceeding the storage capacity limit of the edge server.
  • the optimization goal is to minimize the long-term processing delay of the task and maximize the long-term reward, so the cumulative reward of agent m should be:
  • H( ⁇ ) is the Heaviside step function
  • ⁇ 1 and ⁇ 2 represent the first and second weight coefficients respectively
  • Y m (t) represents the reward for the task processing delay satisfying the delay constraint
  • U m (t) represents the reward for the cache not exceeding the storage capacity limit of the edge server.
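One plausible reading of this three-part reward, with the Heaviside step function H(·) and weight coefficients λ1 and λ2, is sketched below; the sign and scaling of the delay term are assumptions rather than the patent's verbatim formula:

```python
def heaviside(x: float) -> float:
    """Heaviside step function H(x): 1 when the constraint margin is non-negative."""
    return 1.0 if x >= 0 else 0.0

def reward(total_delay: float, delays, max_delays,
           cache_used: float, capacity: float,
           lam1: float = 1.0, lam2: float = 1.0) -> float:
    """Reward = -processing delay + lam1 * delay-constraint term + lam2 * cache term."""
    # Y_m(t): one unit of reward per task whose delay meets its tolerable bound
    y = sum(heaviside(dmax - d) for d, dmax in zip(delays, max_delays))
    # U_m(t): reward when cached services fit within the edge server's storage
    u = heaviside(capacity - cache_used)
    return -total_delay + lam1 * y + lam2 * u
```

Minimizing the long-term delay then coincides with maximizing the cumulative reward, as the text above requires.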
  • Each base station has an actor network and a critic network embedded in an LSTM network. Both the actor network and the critic network include the current network and the target network.
  • the framework of the DSRA algorithm consists of an environment and M agents, namely the base stations. Training is centralized and execution is decentralized: during training, centralized learning is used to train the critic network and the actor network, and critic training requires the state information of the other agents; during distributed execution, the actor network only needs local information. That is, during training each agent uses the global state and actions to estimate the strategies of the other agents, and adjusts its local strategy according to these estimates to approach the global optimum.
  • the Multi-agent Deep Deterministic Policy Gradient (MADDPG) algorithm can handle the situation where the environment is fully observable, while the real environment state is often partially observable.
  • the present invention adds the long short-term memory network LSTM to the actor network and the critic network.
  • LSTM is a recurrent neural network that can extract historical state information about business requests. By learning this historical information, the agent can better understand the future state and make better decisions.
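The recurrent unit embedded in the actor and critic networks can be illustrated by a single LSTM cell step in NumPy; the stacked weight layout and the gate order (input, forget, output, candidate) are conventional assumptions, and a real implementation would use a deep-learning framework:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM step: x is the current input, (h_prev, c_prev) the previous
    hidden and cell states; W (4H x X), U (4H x H), b (4H,) stack the gate
    parameters in the order input, forget, output, candidate."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b       # stacked pre-activations for all gates
    i = sigmoid(z[0:n])              # input gate
    f = sigmoid(z[n:2*n])            # forget gate
    o = sigmoid(z[2*n:3*n])          # output gate
    g = np.tanh(z[3*n:4*n])          # candidate cell state
    c = f * c_prev + i * g           # cell state carries long-term history
    h = o * np.tanh(c)               # hidden state summarizes the history
    return h, c
```

The cell state `c` is what lets the agent retain information about past service requests across time slots, which is the property the patent exploits.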
  • the actor network generates corresponding actions based on the current local observation state of a single agent; specifically, the actor network obtains the current task offloading and resource allocation strategy from the local observation state and generates the corresponding action from the action space; the agent then enters the next state.
  • the reward is computed for the action; experience information is generated from the current state, next state, action and reward value; multiple pieces of experience information are sampled to train the actor network and critic network, the network parameters are updated, and the trained networks are obtained.
  • the experience replay memory D of agent m contains a set of experience tuples (o m (t), a m (t), r m (t), o m (t+1)), where o m (t) represents the observed state of agent m in time slot t
  • a m (t) represents the action taken by agent m based on the current observation
  • r m (t) represents the reward obtained after agent m takes the action
  • o m (t+1) represents the state of agent m in time slot t+1
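The experience replay memory D described above can be sketched as a fixed-capacity buffer of (o, a, r, o′) tuples; this is a generic implementation, not the patent's:

```python
import random
from collections import deque

class ReplayMemory:
    """Fixed-size experience replay memory storing (obs, action, reward, next_obs)."""

    def __init__(self, capacity: int, seed: int = 0):
        self.buffer = deque(maxlen=capacity)   # oldest tuples evicted first
        self.rng = random.Random(seed)

    def push(self, obs, action, reward, next_obs):
        self.buffer.append((obs, action, reward, next_obs))

    def sample(self, batch_size: int):
        """Uniformly sample a training batch of experience tuples."""
        return self.rng.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

Sampling uniformly from the buffer decorrelates consecutive time slots, which is the standard reason replay is used in deep reinforcement learning.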
  • each agent’s actor network uses the local observed state o m (t), the current historical state information, and its own strategy to select an action
  • each critic network can obtain the observations o m (t) and actions a m (t) of the other agents, so the Q function of agent m can be expressed as a centralized action-value function of the joint observations and actions
  • the Q function evaluates the actions of the actor network from a global perspective and guides the actor network to choose a better action.
  • the critic network updates the network parameters by minimizing the loss function, which is defined as follows:
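The omitted loss has the standard temporal-difference form used in MADDPG-style critics: the target y = r + γQ′ comes from the target networks, and the critic minimizes the mean squared error between Q and y. A minimal sketch under that assumption:

```python
import numpy as np

def td_target(reward: float, gamma: float, q_next: float) -> float:
    """y = r + gamma * Q'(o', a'), with Q' taken from the target networks."""
    return reward + gamma * q_next

def critic_loss(q_values, targets) -> float:
    """Mean squared TD error minimized when updating the critic parameters."""
    q = np.asarray(q_values, dtype=float)
    y = np.asarray(targets, dtype=float)
    return float(np.mean((y - q) ** 2))
```

Gradient descent on this loss pulls the centralized Q estimates toward the bootstrapped targets, which in turn gives the actor a better action evaluation.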
  • the actor network updates the network parameters ⁇ based on the centralized Q function calculated by the critic network and its own observation information, and outputs action a.
  • the actor network parameters ⁇ are updated by maximizing the policy gradient, that is:
  • the parameters of the target network are updated by soft updating, namely:
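The soft update mentioned here is conventionally the Polyak rule θ_target ← τ·θ + (1 − τ)·θ_target with a small τ; a one-line sketch (the parameter-list representation is an assumption):

```python
def soft_update(target_params, current_params, tau: float):
    """Polyak averaging: theta_target <- tau * theta + (1 - tau) * theta_target."""
    return [tau * p + (1.0 - tau) * tp
            for tp, p in zip(target_params, current_params)]
```

A small τ (e.g. 0.01) makes the target networks track the current networks slowly, which stabilizes the bootstrapped critic targets.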
  • the actions taken by the actor network can be used to obtain the task offloading, service caching and resource allocation strategies within the time period T.
  • Task offloading based on the task offloading and resource allocation strategies can minimize the total processing delay of the task while satisfying various constraints.
  • the present invention is compared with the multi-agent deep deterministic policy gradient algorithm (MADDPG), the single-agent deep deterministic policy gradient algorithm (SADDPG), and the LSTM-based single-agent deep deterministic policy gradient algorithm (TADPG).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Disclosed is a task offloading and resource allocation method based on mobile edge computing, relating to the technical field of wireless communications. The method comprises: constructing a mobile edge computing system model; constructing a service cache model and a service assignment model on the basis of the system model; establishing task offloading and resource allocation constraints on the basis of the service cache model and the service assignment model; constructing, subject to these constraints, a joint task offloading and resource allocation optimization problem with the objective of minimizing task processing delay; and solving the joint optimization problem by means of a DSRA algorithm to obtain a task offloading and resource allocation strategy. The present invention can achieve low delay and a high cache hit rate, and realize on-demand resource allocation.
PCT/CN2023/100968 2023-02-20 2023-06-19 Task offloading and resource allocation method based on mobile edge computing WO2024174426A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310138344.8 2023-02-20
CN202310138344.8A CN116137724A (zh) 2023-02-20 2023-02-20 Task offloading and resource allocation method based on mobile edge computing

Publications (1)

Publication Number Publication Date
WO2024174426A1 true WO2024174426A1 (fr) 2024-08-29

Family

ID=86333467

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/100968 WO2024174426A1 (fr) 2023-02-20 2023-06-19 Task offloading and resource allocation method based on mobile edge computing

Country Status (2)

Country Link
CN (1) CN116137724A (fr)
WO (1) WO2024174426A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116137724A (zh) * 2023-02-20 2023-05-19 重庆邮电大学 Task offloading and resource allocation method based on mobile edge computing
CN116743584B (zh) * 2023-08-09 2023-10-27 山东科技大学 Dynamic RAN slicing method based on information sensing and joint computation and caching
CN118574161A (zh) * 2024-06-19 2024-08-30 中国传媒大学 GAT-DDPG-based task offloading strategy for UAV-assisted Internet of Vehicles

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111132191A (zh) * 2019-12-12 2020-05-08 重庆邮电大学 Joint task offloading, caching and resource allocation method for mobile edge computing servers
US20220032933A1 (en) * 2020-07-31 2022-02-03 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for generating a task offloading strategy for a vehicular edge-computing environment
CN114760311A (zh) * 2022-04-22 2022-07-15 南京邮电大学 Optimized service caching and computation offloading method for mobile edge network systems
CN115297013A (zh) * 2022-08-04 2022-11-04 重庆大学 Joint task offloading and service caching optimization method based on edge collaboration
CN116137724A (zh) * 2023-02-20 2023-05-19 重庆邮电大学 Task offloading and resource allocation method based on mobile edge computing


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAO ZHIXIU; LI YUN; XIA SHICHAO; WU GUANGFU: "Attention Cooperative Task Offloading and Service Caching in Edge Computing", GLOBECOM 2022 - 2022 IEEE GLOBAL COMMUNICATIONS CONFERENCE, IEEE, 4 December 2022 (2022-12-04), pages 5189 - 5194, XP034268202, DOI: 10.1109/GLOBECOM48099.2022.10001202 *

Also Published As

Publication number Publication date
CN116137724A (zh) 2023-05-19

Similar Documents

Publication Publication Date Title
WO2024174426A1 (fr) Task offloading and resource allocation method based on mobile edge computing
Lin et al. Resource management for pervasive-edge-computing-assisted wireless VR streaming in industrial Internet of Things
CN111031102A (zh) 一种多用户、多任务的移动边缘计算系统中可缓存的任务迁移方法
CN114340016B (zh) 一种电网边缘计算卸载分配方法及系统
CN112689296B (zh) 一种异构IoT网络中的边缘计算与缓存方法及系统
Qin et al. Collaborative edge computing and caching in vehicular networks
CN113115368A (zh) 基于深度强化学习的基站缓存替换方法、系统及存储介质
CN116260871A (zh) 一种基于本地和边缘协同缓存的独立任务卸载方法
CN115344395B (zh) 面向异质任务泛化的边缘缓存调度、任务卸载方法和系统
Ai et al. Dynamic offloading strategy for delay-sensitive task in mobile-edge computing networks
CN114626298A (zh) 无人机辅助车联网中高效缓存和任务卸载的状态更新方法
CN116367231A (zh) 基于ddpg算法的边缘计算车联网资源管理联合优化方法
CN116233926A (zh) 一种基于移动边缘计算的任务卸载及服务缓存联合优化方法
CN116233927A (zh) 一种在移动边缘计算中负载感知的计算卸载节能优化方法
CN116489712B (zh) 一种基于深度强化学习的移动边缘计算任务卸载方法
CN116566838A (zh) 一种区块链与边缘计算协同的车联网任务卸载和内容缓存方法
CN114980039A (zh) D2d协作计算的mec系统中的随机任务调度和资源分配方法
Zhang et al. A deep reinforcement learning approach for online computation offloading in mobile edge computing
Ansere et al. Quantum deep reinforcement learning for dynamic resource allocation in mobile edge computing-based IoT systems
Lakew et al. Adaptive partial offloading and resource harmonization in wireless edge computing-assisted IoE networks
Zhang et al. Computation offloading and resource allocation in F-RANs: A federated deep reinforcement learning approach
CN116321293A (zh) 基于多智能体强化学习的边缘计算卸载和资源分配方法
Li et al. Dqn-based collaborative computation offloading for edge load balancing
CN117858109A (zh) 基于数字孪生的用户关联、任务卸载和资源分配优化方法
CN117354934A (zh) 一种多时隙mec系统双时间尺度任务卸载和资源分配方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23923594

Country of ref document: EP

Kind code of ref document: A1