
CN113296953A - Distributed computing architecture, method and device of cloud side heterogeneous edge computing network - Google Patents

Distributed computing architecture, method and device of cloud side heterogeneous edge computing network Download PDF

Info

Publication number
CN113296953A
CN113296953A (application CN202110622828.0A)
Authority
CN
China
Prior art keywords
edge
computing
cloud
model
energy consumption
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110622828.0A
Other languages
Chinese (zh)
Other versions
CN113296953B (en)
Inventor
宋令阳
王鹏飞
邸博雅
边凯归
程翔
孙绍辉
庹虎
崔斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN202110622828.0A priority Critical patent/CN113296953B/en
Publication of CN113296953A publication Critical patent/CN113296953A/en
Application granted granted Critical
Publication of CN113296953B publication Critical patent/CN113296953B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5083 - Techniques for rebalancing the load in a distributed system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 - Arrangements for executing specific programs
    • G06F 9/445 - Program loading or initiating
    • G06F 9/44594 - Unloading
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • G06F 9/5072 - Grid computing
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention relates to a distributed computing architecture, method and device for a cloud-edge heterogeneous edge computing network. The method comprises the following steps: determining an energy consumption model, the energy consumption model comprising an edge device model, an edge server model and a cloud computing center model; constructing an objective function based on the energy consumption model; determining constraint conditions; normalizing the objective function and the constraints; grouping the normalized objective functions until each group contains only one objective function and one corresponding variable; and updating each variable until convergence to obtain the optimal computation offloading ratio. The method obtains the optimal computation offloading ratio for the network and minimizes the network energy consumption.

Description

Distributed computing architecture, method and device of a cloud-edge heterogeneous edge computing network
Technical Field
The invention relates to the field of heterogeneous edge computing, and in particular to a distributed computing architecture, method and device for a cloud-edge heterogeneous edge computing network.
Background
In prior-art heterogeneous edge computing (HetMEC) networks, the cloud is combined with multiple layers of edge computing so that computing tasks generated at the edge devices can be offloaded to servers at different layers. This spreads the computing load sensibly, improves computing efficiency, and, through the joint optimization of computation offloading, computing resource allocation and transmission resource allocation, effectively minimizes system delay. However, such solutions do not consider system energy consumption, which is crucial in an edge computing network, especially for the edge devices.
Most existing schemes consider only a two-layer edge computing network, that is, a single layer of edge servers above a bottom layer of edge devices, and they reduce the energy consumption of task execution for two cases: a single server and multiple servers.
Most current schemes are centralized optimization algorithms. The schemes aimed at distributed networks usually rely on games to coordinate the decisions of self-interested individuals; such approaches scale poorly and are not suitable for processing the large data volumes of large-scale networks.
Disclosure of Invention
The invention aims to provide a distributed computing architecture, method and device for a cloud-edge heterogeneous edge computing network that reduce the total energy consumption of the system.
In order to achieve the purpose, the invention provides the following scheme:
a distributed computing method of a cloud-edge heterogeneous edge computing network, the method comprising:
determining an energy consumption model, the energy consumption model comprising an edge device model, an edge server model and a cloud computing center model;
constructing an objective function based on the energy consumption model;
determining constraint conditions;
normalizing the objective function and the constraints;
grouping the normalized objective functions until each group contains only one objective function and one corresponding variable;
and updating each variable until convergence to obtain the optimal computation offloading ratio.
Optionally, after normalizing the objective function and the constraints, the method further comprises:
converting the normalized objective function and constraints into vector form.
Optionally, the expression of the edge device model is as follows:
E_i^ED = k_b (θ_i^b)^2 μ λ_i x_i^b
where E_i^ED is the energy consumption of edge device i, k_b is the effective switched capacitance determined by the edge device chip architecture, θ_i^b is the computing resource the edge device devotes to the computation, μ is the computing resource required per bit of data, λ_i is the raw data volume at edge device i, and x_i^b is the proportion of the raw computation data of edge device i computed at the edge device layer;
the expression of the edge server model is as follows:
E_j^ES = k_m (θ_j^m)^2 μ Σ_{i∈N_j} λ_i x_i^m
where E_j^ES is the energy consumption of edge server j, k_m is the effective switched capacitance determined by the edge server hardware architecture, θ_j^m is the computing resource the edge server devotes to the computation, μ is the computing resource required per bit of data, N_j is the set of edge devices connected to edge server j, λ_i is the raw data volume at edge device i, and x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer;
the expression of the cloud computing center model is as follows:
E^CC = k_t (θ_t)^2 μ Σ_{i=1}^{N} λ_i (1 - x_i^b - x_i^m)
where E^CC is the energy consumption of the cloud computing center, k_t is the effective switched capacitance determined by the cloud computing center hardware architecture, θ_t is the computing resource the cloud center server devotes to the computation, μ is the computing resource required per bit of data, N is the number of all edge devices, λ_i is the raw data volume at edge device i, x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer, and x_i^b is the proportion computed at the edge device layer.
Optionally, the expression of the objective function is as follows:
min Σ_{i=1}^{N} E_i^ED + Σ_{j=1}^{M} E_j^ES + E^CC
where E_i^ED is the energy consumption of edge device i, E_j^ES is the energy consumption of edge server j, E^CC is the energy consumption of the cloud computing center, N is the number of edge devices, and M is the number of edge servers.
The expression of the constraint is as follows:
Σ_{i=1}^{N} λ_i (1 - x_i^b - x_i^m) ≤ Ψ
where Ψ is the total transmission resource of the cloud computing center, λ_i is the raw data volume at edge device i, x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer, x_i^b is the proportion computed at the edge device layer, and N is the number of edge devices.
Optionally, the expressions of the normalized objective function and constraints are as follows:
min Σ_{i=1}^{N} k_b (θ_i^b)^2 μ λ_i x_i^b + Σ_{j=1}^{M} Σ_{i∈N_j} k_m (θ_j^m)^2 μ λ_i x_i^m + Σ_{i=1}^{N} k_t (θ_t)^2 μ λ_i (1 - x_i^b - x_i^m)
s.t. Σ_{i=1}^{N} λ_i (1 - x_i^b - x_i^m) ≤ Ψ
x_i^b + x_i^m ≤ 1 for every edge device i
0 ≤ x ≤ 1 for every offloading ratio x
where k_b is the effective switched capacitance determined by the edge device chip architecture, k_t is the effective switched capacitance determined by the cloud computing center hardware architecture, k_m is the effective switched capacitance determined by the edge server hardware architecture, x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer, x_i^b is the proportion computed at the edge device layer, θ_t is the computing resource of the cloud center server, θ_i^b is the computing resource of the edge device, θ_j^m is the computing resource of the edge server, μ is the computing resource required per bit of data, λ_i is the raw data volume at edge device i, and x denotes the offloading ratio of a task at a device of a given layer.
Optionally, the normalized objective function and constraints are converted into vector form specifically by the following formula.
The augmented Lagrangian function takes the form:
L_ρ(x, y, ξ) = F(x, y) - ξ^T (Ax + Ay - b) + (ρ/2) ||Ax + Ay - b||^2
where F(x, y) is the normalized objective function, the matrix A is determined by the raw data volumes λ_i at the edge devices i, the vector x collects the task offloading proportions of all computing tasks at the edge devices, the vector y collects the task offloading proportions of all computing tasks at the edge servers, b is the total computation amount offloaded by the edge devices and the edge servers, Ψ is the total transmission resource of the cloud computing center, L_ρ is the weighted total cost of all computing devices in the network, and ξ = [ξ_1, …, ξ_N] is the shadow price of the transmission resource.
Optionally, each variable is updated specifically by the following formulas:
x^(k+1) = argmin_x { L_ρ(x, y^(k), λ^(k)) + (τ_1/2) ||x - x^(k)||^2 }
y^(k+1) = argmin_y { L_ρ(x^(k+1), y, λ^(k)) + (τ_2/2) ||y - y^(k)||^2 }
λ^(k+1) = λ^(k) - ρ (Ax^(k+1) + Ay^(k+1) - b)
where the superscript k denotes the value of the variable in the k-th iteration, λ^(k) denotes the multiplier (shadow price) in the k-th iteration, τ_1 is the transmission resource loss cost of the edge devices, τ_2 is the transmission resource loss cost of the edge servers, and ρ is a cost factor.
The invention further provides a distributed computing architecture of a cloud-edge heterogeneous edge computing network, comprising:
an energy consumption model determining module, configured to determine an energy consumption model, the energy consumption model comprising an edge device model, an edge server model and a cloud computing center model;
an objective function constructing module, configured to construct an objective function based on the energy consumption model;
a constraint determining module, configured to determine constraint conditions;
a normalization module, configured to normalize the objective function and the constraints;
a grouping module, configured to group the normalized objective functions until each group contains only one objective function and one corresponding variable;
and an updating module, configured to update each variable until convergence to obtain the optimal computation offloading ratio.
The present invention further provides a distributed computing apparatus of a cloud-edge heterogeneous edge computing network, including: a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
determining an energy consumption model, the energy consumption model comprising an edge device model, an edge server model and a cloud computing center model;
constructing an objective function based on the energy consumption model;
determining constraint conditions;
normalizing the objective function and the constraints;
grouping the normalized objective functions until each group contains only one objective function and one corresponding variable;
and updating each variable until convergence to obtain the optimal computation offloading ratio.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the method of the invention decomposes a complex optimization problem into a plurality of simple sub-problems by using a distributed multi-block ADMM computing architecture, each sub-problem can be processed in a distributed way by different computing equipment, and the solution of large data amount computing tasks can be realized by parallel processing, so that the method can be easily expanded to ultra-dense and large-scale edge computing networks, and can process large-scale data computing tasks in a distributed way, and has strong expansibility and high energy utilization efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic view of a task offloading scenario and a model of a cloud-edge heterogeneous edge computing network according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for determining an optimal network computation offload ratio according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an algorithm flow according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a distributed computing architecture, method and device for a cloud-edge heterogeneous edge computing network that reduce the total energy consumption of the system.
Fig. 1 is a schematic view of a task offloading scenario and a model of a cloud-edge heterogeneous edge computing network according to an embodiment of the present invention, and Fig. 2 is a flowchart of a method for determining the optimal network computation offloading ratio according to an embodiment of the present invention. As shown in Fig. 1 and Fig. 2, the method includes:
step 101: determining an energy consumption model; the energy consumption model comprises: an edge device model, an edge server model, and a cloud computing center model.
The expression of the edge device model is as follows:
E_i^ED = k_b (θ_i^b)^2 μ λ_i x_i^b
where E_i^ED is the energy consumption of edge device i, k_b is the effective switched capacitance determined by the edge device chip architecture, θ_i^b is the computing resource the edge device devotes to the computation, μ is the computing resource required per bit of data, λ_i is the raw data volume at edge device i, and x_i^b is the proportion of the raw computation data of edge device i computed at the edge device layer;
the expression of the edge server model is as follows:
E_j^ES = k_m (θ_j^m)^2 μ Σ_{i∈N_j} λ_i x_i^m
where E_j^ES is the energy consumption of edge server j, k_m is the effective switched capacitance determined by the edge server hardware architecture, θ_j^m is the computing resource the edge server devotes to the computation, μ is the computing resource required per bit of data, N_j is the set of edge devices connected to edge server j, λ_i is the raw data volume at edge device i, and x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer;
the expression of the cloud computing center model is as follows:
E^CC = k_t (θ_t)^2 μ Σ_{i=1}^{N} λ_i (1 - x_i^b - x_i^m)
where E^CC is the energy consumption of the cloud computing center, k_t is the effective switched capacitance determined by the cloud computing center hardware architecture, θ_t is the computing resource the cloud center server devotes to the computation, μ is the computing resource required per bit of data, N is the number of all edge devices, λ_i is the raw data volume at edge device i, x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer, and x_i^b is the proportion computed at the edge device layer.
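For concreteness, the following minimal Python sketch evaluates the three energy terms, assuming the effective-switched-capacitance form E = k · θ^2 · μ · (bits processed) that the variable definitions above suggest; the function and parameter names are illustrative and not taken from the patent.

def edge_device_energy(k_b, theta_b, mu, lam_i, x_b):
    """Energy of edge device i computing a fraction x_b of its lam_i bits locally."""
    return k_b * theta_b ** 2 * mu * lam_i * x_b

def edge_server_energy(k_m, theta_m, mu, offloads):
    """Energy of edge server j; offloads is a list of (lam_i, x_m) pairs for the devices in N_j."""
    return k_m * theta_m ** 2 * mu * sum(lam_i * x_m for lam_i, x_m in offloads)

def cloud_energy(k_t, theta_t, mu, tasks):
    """Energy of the cloud computing center; tasks is a list of (lam_i, x_b, x_m) triples,
    with the remainder 1 - x_b - x_m of each task reaching the cloud."""
    return k_t * theta_t ** 2 * mu * sum(lam_i * (1 - x_b - x_m) for lam_i, x_b, x_m in tasks)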
Step 102: constructing an objective function based on the energy consumption model.
The expression of the objective function is as follows:
min Σ_{i=1}^{N} E_i^ED + Σ_{j=1}^{M} E_j^ES + E^CC
where E_i^ED is the energy consumption of edge device i, E_j^ES is the energy consumption of edge server j, E^CC is the energy consumption of the cloud computing center, N is the number of edge devices, and M is the number of edge servers.
Step 103: determining the constraint conditions.
The uploads of the different offloaded tasks are limited by the total available transmission bandwidth Ψ of the cloud computing center (CC), and the resource contention during the upload process is described by the following constraint.
The expression of the constraint is as follows:
Σ_{i=1}^{N} λ_i (1 - x_i^b - x_i^m) ≤ Ψ
where Ψ is the total transmission resource of the cloud computing center, λ_i is the raw data volume at edge device i, x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer, x_i^b is the proportion computed at the edge device layer, and N is the number of edge devices.
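As a complement, the vectorized sketch below evaluates the step-102 objective (total network energy) and checks the step-103 transmission-resource constraint; the array layout and the names server_of and psi are assumptions made for illustration.

import numpy as np

def total_energy(k_b, theta_b, k_m, theta_m, k_t, theta_t, mu, lam, x_b, x_m, server_of):
    """Total network energy for N devices and M servers.
    lam, x_b, x_m, theta_b: arrays of length N; theta_m: array of length M;
    server_of[i] is the index of the edge server that device i is connected to."""
    e_dev = np.sum(k_b * theta_b ** 2 * mu * lam * x_b)
    e_srv = np.sum(k_m * theta_m[server_of] ** 2 * mu * lam * x_m)
    e_cc = np.sum(k_t * theta_t ** 2 * mu * lam * (1.0 - x_b - x_m))
    return e_dev + e_srv + e_cc

def within_transmission_budget(lam, x_b, x_m, psi):
    """Check that the data forwarded to the cloud fits within its transmission resource psi."""
    return np.sum(lam * (1.0 - x_b - x_m)) <= psi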
Step 104: normalizing the objective function and the constraint conditions to obtain the following optimization problem.
The expressions of the normalized objective function and constraints are as follows:
min Σ_{i=1}^{N} k_b (θ_i^b)^2 μ λ_i x_i^b + Σ_{j=1}^{M} Σ_{i∈N_j} k_m (θ_j^m)^2 μ λ_i x_i^m + Σ_{i=1}^{N} k_t (θ_t)^2 μ λ_i (1 - x_i^b - x_i^m)
s.t. Σ_{i=1}^{N} λ_i (1 - x_i^b - x_i^m) ≤ Ψ
x_i^b + x_i^m ≤ 1 for every edge device i
0 ≤ x ≤ 1 for every offloading ratio x
where k_b is the effective switched capacitance determined by the edge device chip architecture, k_t is the effective switched capacitance determined by the cloud computing center hardware architecture, k_m is the effective switched capacitance determined by the edge server hardware architecture, x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer, x_i^b is the proportion computed at the edge device layer, θ_t is the computing resource of the cloud center server, θ_i^b is the computing resource of the edge device, θ_j^m is the computing resource of the edge server, μ is the computing resource required per bit of data, λ_i is the raw data volume at edge device i, and x denotes the offloading ratio of a task at a device of a given layer; for example, x can be set to x_i^m, in which case x represents the proportion of the raw computation data of edge device i computed at the edge server.
Step 105: performing variable substitution on the normalized objective function and constraints.
Through variable substitution, the problem in step 104 is converted into the following form. This optimization problem is then solved in a distributed manner by a multi-block ADMM algorithm and mapped onto the cloud-edge combined heterogeneous edge computing network, so that the network energy consumption is minimized.
min_{x,y} Σ_{i=1}^{N_1} f_i(x_i) + Σ_{i=1}^{N_2} g_i(y_i)
s.t. Ax + Ay = b
where each f_i and g_i is the energy term associated with one offloading variable, N = N_1 + N_2, N is the number of all edge devices, and M is the number of edge servers.
Step 106: grouping the objective functions after variable substitution until each group contains only one objective function and one corresponding variable.
The N sub-functions in the objective function and their variables are divided into two groups: one group comprises N_1 sub-functions f_1, …, f_{N_1} and their variables x_1, …, x_{N_1}, and the other group comprises N_2 sub-functions g_1, …, g_{N_2} and their variables y_1, …, y_{N_2}, where N_1 + N_2 = N.
For simplicity, the above problem can be written in the following vector form.
The augmented Lagrangian function takes the form:
L_ρ(x, y, ξ) = Σ_{i=1}^{N_1} f_i(x_i) + Σ_{i=1}^{N_2} g_i(y_i) - ξ^T (Ax + Ay - b) + (ρ/2) ||Ax + Ay - b||^2
where the matrix A is determined by the raw data volumes λ_i at the edge devices i, the vector x collects the task offloading proportions of all computing tasks at the edge devices, the vector y collects the task offloading proportions of all computing tasks at the edge servers, b is the total computation amount offloaded by the edge devices and the edge servers, Ψ is the total transmission resource of the cloud computing center, L_ρ is the weighted total cost of all computing devices in the network, and ξ = [ξ_1, …, ξ_N] is the shadow price of the transmission resource.
By analogy, each group of objective functions can be further divided into two groups until each group contains only one objective function and one corresponding variable.
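A minimal sketch of this recursive bisection, assuming the sub-functions are simply indexed, is:

def split_into_singletons(indices):
    """Recursively bisect a list of sub-function indices until every group is a singleton."""
    if len(indices) <= 1:
        return [indices]
    mid = len(indices) // 2
    return split_into_singletons(indices[:mid]) + split_into_singletons(indices[mid:])

# Example: six sub-functions are split into [[0], [1], [2], [3], [4], [5]].
print(split_into_singletons(list(range(6))))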
Step 107: updating each variable until convergence to obtain the optimal computation offloading ratio.
Thus, under conditions that guarantee convergence, the update formula of each variable is obtained:
x^(k+1) = argmin_x { L_ρ(x, y^(k), λ^(k)) + (τ_1/2) ||x - x^(k)||^2 }
y^(k+1) = argmin_y { L_ρ(x^(k+1), y, λ^(k)) + (τ_2/2) ||y - y^(k)||^2 }
λ^(k+1) = λ^(k) - ρ (Ax^(k+1) + Ay^(k+1) - b)
where the superscript k denotes the value of the variable in the k-th iteration, λ^(k) denotes the multiplier (shadow price) in the k-th iteration, τ_1 is the transmission resource loss cost of the edge devices, τ_2 is the transmission resource loss cost of the edge servers, and ρ is a cost factor.
The update of λ is performed by the cloud computing center, and the updates of (x, y) are performed by the respective edge devices and edge servers. Each variable is updated iteratively, in a distributed and parallel manner, by the different devices according to the above formulas until convergence, which yields the optimal computation offloading ratio.
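The following Python sketch illustrates this distributed iteration under stated assumptions: the equality form Ax + Ay = b, proximal terms weighted by tau1 and tau2 in the x- and y-updates, box constraints on the offloading ratios, and projected gradient steps standing in for the per-device sub-problem solvers. It is an illustration of the multi-block ADMM idea, not the patent's reference implementation.

import numpy as np

def admm_offload(f_grad, g_grad, A, b, rho=1.0, tau1=1.0, tau2=1.0,
                 iters=300, inner=25, lr=1e-2):
    """Two-block ADMM sketch for min f(x) + g(y) s.t. A x + A y = b.
    Conceptually the x-update runs on the edge devices, the y-update on the
    edge servers, and the multiplier update on the cloud computing center."""
    n = A.shape[1]
    x, y = np.zeros(n), np.zeros(n)
    lam = np.zeros(A.shape[0])  # multiplier (shadow price of the transmission resource)

    def residual(xv, yv):
        return A @ xv + A @ yv - b

    for _ in range(iters):
        x_prev, y_prev = x.copy(), y.copy()
        # x-update: projected gradient steps on L_rho(x, y^k, lam^k) + (tau1/2)*||x - x^k||^2
        for _ in range(inner):
            grad = f_grad(x) - A.T @ lam + rho * (A.T @ residual(x, y_prev)) + tau1 * (x - x_prev)
            x = np.clip(x - lr * grad, 0.0, 1.0)  # offloading ratios stay in [0, 1]
        # y-update: same scheme, using the freshly updated x
        for _ in range(inner):
            grad = g_grad(y) - A.T @ lam + rho * (A.T @ residual(x, y)) + tau2 * (y - y_prev)
            y = np.clip(y - lr * grad, 0.0, 1.0)
        # multiplier update at the cloud center, matching lambda^(k+1) = lambda^(k) - rho*(Ax + Ay - b)
        lam = lam - rho * residual(x, y)
    return x, y, lam

Here f_grad and g_grad stand for the gradients of the separable energy terms handled by the edge devices and the edge servers, respectively; in the patent's scheme each device would solve its own small sub-problem locally instead of running these inner loops centrally.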
The invention further provides a distributed computing architecture of a cloud-edge heterogeneous edge computing network, comprising:
an energy consumption model determining module, configured to determine an energy consumption model, the energy consumption model comprising an edge device model, an edge server model and a cloud computing center model;
an objective function constructing module, configured to construct an objective function based on the energy consumption model;
a constraint determining module, configured to determine constraint conditions;
a normalization module, configured to normalize the objective function and the constraints;
a grouping module, configured to group the normalized objective functions until each group contains only one objective function and one corresponding variable;
and an updating module, configured to update each variable until convergence to obtain the optimal computation offloading ratio.
The invention also provides a distributed computing device of the cloud-edge heterogeneous edge computing network, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
determining an energy consumption model, the energy consumption model comprising an edge device model, an edge server model and a cloud computing center model;
constructing an objective function based on the energy consumption model;
determining constraint conditions;
normalizing the objective function and the constraints;
grouping the normalized objective functions until each group contains only one objective function and one corresponding variable;
and updating each variable until convergence to obtain the optimal computation offloading ratio.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (9)

1. A distributed computing method of a cloud-edge heterogeneous edge computing network, the method comprising:
determining an energy consumption model, the energy consumption model comprising an edge device model, an edge server model and a cloud computing center model;
constructing an objective function based on the energy consumption model;
determining constraint conditions;
normalizing the objective function and the constraints;
grouping the normalized objective functions until each group contains only one objective function and one corresponding variable;
and updating each variable until convergence to obtain the optimal computation offloading ratio.
2. The distributed computing method of the cloud-edge heterogeneous edge computing network of claim 1, further comprising, after normalizing the objective function and the constraints:
converting the normalized objective function and constraints into vector form.
3. The distributed computing method of the cloud-edge heterogeneous edge computing network of claim 1, wherein the expression of the edge device model is as follows:
E_i^ED = k_b (θ_i^b)^2 μ λ_i x_i^b
where E_i^ED is the energy consumption of edge device i, k_b is the effective switched capacitance determined by the edge device chip architecture, θ_i^b is the computing resource the edge device devotes to the computation, μ is the computing resource required per bit of data, λ_i is the raw data volume at edge device i, and x_i^b is the proportion of the raw computation data of edge device i computed at the edge device layer;
the expression of the edge server model is as follows:
E_j^ES = k_m (θ_j^m)^2 μ Σ_{i∈N_j} λ_i x_i^m
where E_j^ES is the energy consumption of edge server j, k_m is the effective switched capacitance determined by the edge server hardware architecture, θ_j^m is the computing resource the edge server devotes to the computation, μ is the computing resource required per bit of data, N_j is the set of edge devices connected to edge server j, λ_i is the raw data volume at edge device i, and x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer;
the expression of the cloud computing center model is as follows:
E^CC = k_t (θ_t)^2 μ Σ_{i=1}^{N} λ_i (1 - x_i^b - x_i^m)
where E^CC is the energy consumption of the cloud computing center, k_t is the effective switched capacitance determined by the cloud computing center hardware architecture, θ_t is the computing resource the cloud center server devotes to the computation, μ is the computing resource required per bit of data, N is the number of all edge devices, λ_i is the raw data volume at edge device i, x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer, and x_i^b is the proportion computed at the edge device layer.
4. The distributed computing method of the cloud-edge heterogeneous edge computing network according to claim 1, wherein the expression of the objective function is as follows:
min Σ_{i=1}^{N} E_i^ED + Σ_{j=1}^{M} E_j^ES + E^CC
where E_i^ED is the energy consumption of edge device i, E_j^ES is the energy consumption of edge server j, E^CC is the energy consumption of the cloud computing center, N is the number of edge devices, and M is the number of edge servers;
the expression of the constraint is as follows:
Σ_{i=1}^{N} λ_i (1 - x_i^b - x_i^m) ≤ Ψ
where Ψ is the total transmission resource of the cloud computing center, λ_i is the raw data volume at edge device i, x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer, x_i^b is the proportion computed at the edge device layer, and N is the number of edge devices.
5. The distributed computing method of the cloud-edge heterogeneous edge computing network according to claim 1, wherein the expressions of the normalized objective function and the constraint condition are as follows:
min Σ_{i=1}^{N} k_b (θ_i^b)^2 μ λ_i x_i^b + Σ_{j=1}^{M} Σ_{i∈N_j} k_m (θ_j^m)^2 μ λ_i x_i^m + Σ_{i=1}^{N} k_t (θ_t)^2 μ λ_i (1 - x_i^b - x_i^m)
s.t. Σ_{i=1}^{N} λ_i (1 - x_i^b - x_i^m) ≤ Ψ
x_i^b + x_i^m ≤ 1 for every edge device i
0 ≤ x ≤ 1 for every offloading ratio x
where k_b is the effective switched capacitance determined by the edge device chip architecture, k_t is the effective switched capacitance determined by the cloud computing center hardware architecture, k_m is the effective switched capacitance determined by the edge server hardware architecture, x_i^m is the proportion of the raw computation data of edge device i computed at the edge server layer, x_i^b is the proportion computed at the edge device layer, θ_t is the computing resource of the cloud center server, θ_i^b is the computing resource of the edge device, θ_j^m is the computing resource of the edge server, μ is the computing resource required per bit of data, λ_i is the raw data volume at edge device i, and x denotes the offloading ratio of a task at a device of a given layer.
6. The distributed computing method of the cloud edge heterogeneous edge computing network according to claim 2, wherein the conversion of the normalized objective function and the constraint condition into a vector form specifically adopts the following formula:
the augmented Lagrangian function is in the form:
L_ρ(x, y, ξ) = F(x, y) - ξ^T (Ax + Ay - b) + (ρ/2) ||Ax + Ay - b||^2
where F(x, y) is the normalized objective function, the matrix A is determined by the raw data volumes λ_i at the edge devices i, the vector x collects the task offloading proportions of all computing tasks at the edge devices, the vector y collects the task offloading proportions of all computing tasks at the edge servers, b is the total computation amount offloaded by the edge devices and the edge servers, Ψ is the total transmission resource of the cloud computing center, L_ρ is the weighted total cost of all computing devices in the network, and ξ = [ξ_1, …, ξ_N] is the shadow price of the transmission resource.
7. The distributed computing method of the cloud-edge heterogeneous edge computing network according to claim 1, wherein each variable is updated specifically by using the following formula:
x^(k+1) = argmin_x { L_ρ(x, y^(k), λ^(k)) + (τ_1/2) ||x - x^(k)||^2 }
y^(k+1) = argmin_y { L_ρ(x^(k+1), y, λ^(k)) + (τ_2/2) ||y - y^(k)||^2 }
λ^(k+1) = λ^(k) - ρ (Ax^(k+1) + Ay^(k+1) - b)
where the superscript k denotes the value of the variable in the k-th iteration, λ^(k) denotes the multiplier (shadow price) in the k-th iteration, τ_1 is the transmission resource loss cost of the edge devices, τ_2 is the transmission resource loss cost of the edge servers, and ρ is a cost factor.
8. A distributed computing architecture for a cloud-edge heterogeneous edge computing network, the distributed computing architecture comprising:
an energy consumption model determining module, configured to determine an energy consumption model, the energy consumption model comprising an edge device model, an edge server model and a cloud computing center model;
an objective function constructing module, configured to construct an objective function based on the energy consumption model;
a constraint determining module, configured to determine constraint conditions;
a normalization module, configured to normalize the objective function and the constraints;
a grouping module, configured to group the normalized objective functions until each group contains only one objective function and one corresponding variable;
and an updating module, configured to update each variable until convergence to obtain the optimal computation offloading ratio.
9. A distributed computing device of a cloud-edge heterogeneous edge computing network, the distributed computing device of the cloud-edge heterogeneous edge computing network comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
determining an energy consumption model, the energy consumption model comprising an edge device model, an edge server model and a cloud computing center model;
constructing an objective function based on the energy consumption model;
determining constraint conditions;
normalizing the objective function and the constraints;
grouping the normalized objective functions until each group contains only one objective function and one corresponding variable;
and updating each variable until convergence to obtain the optimal computation offloading ratio.
CN202110622828.0A 2021-06-04 2021-06-04 Distributed computing architecture, method and device of cloud side heterogeneous edge computing network Active CN113296953B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110622828.0A CN113296953B (en) 2021-06-04 2021-06-04 Distributed computing architecture, method and device of cloud side heterogeneous edge computing network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110622828.0A CN113296953B (en) 2021-06-04 2021-06-04 Distributed computing architecture, method and device of cloud side heterogeneous edge computing network

Publications (2)

Publication Number Publication Date
CN113296953A true CN113296953A (en) 2021-08-24
CN113296953B CN113296953B (en) 2022-02-15

Family

ID=77327188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110622828.0A Active CN113296953B (en) 2021-06-04 2021-06-04 Distributed computing architecture, method and device of cloud side heterogeneous edge computing network

Country Status (1)

Country Link
CN (1) CN113296953B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117891591A (en) * 2023-12-04 2024-04-16 国网河北省电力有限公司信息通信分公司 Task unloading method and device based on edge calculation and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109121151A (en) * 2018-11-01 2019-01-01 南京邮电大学 Distributed discharging method under the integrated mobile edge calculations of cellulor
CN109684075A (en) * 2018-11-28 2019-04-26 深圳供电局有限公司 Method for unloading computing tasks based on edge computing and cloud computing cooperation
CN111245651A (en) * 2020-01-08 2020-06-05 上海交通大学 Task unloading method based on power control and resource allocation
CN111447619A (en) * 2020-03-12 2020-07-24 重庆邮电大学 Joint task unloading and resource allocation method in mobile edge computing network
CN112286677A (en) * 2020-08-11 2021-01-29 安阳师范学院 Resource-constrained edge cloud-oriented Internet of things application optimization deployment method
CN112667406A (en) * 2021-01-10 2021-04-16 中南林业科技大学 Task unloading and data caching method in cloud edge fusion heterogeneous network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109121151A (en) * 2018-11-01 2019-01-01 南京邮电大学 Distributed discharging method under the integrated mobile edge calculations of cellulor
CN109684075A (en) * 2018-11-28 2019-04-26 深圳供电局有限公司 Method for unloading computing tasks based on edge computing and cloud computing cooperation
CN111245651A (en) * 2020-01-08 2020-06-05 上海交通大学 Task unloading method based on power control and resource allocation
CN111447619A (en) * 2020-03-12 2020-07-24 重庆邮电大学 Joint task unloading and resource allocation method in mobile edge computing network
CN112286677A (en) * 2020-08-11 2021-01-29 安阳师范学院 Resource-constrained edge cloud-oriented Internet of things application optimization deployment method
CN112667406A (en) * 2021-01-10 2021-04-16 中南林业科技大学 Task unloading and data caching method in cloud edge fusion heterogeneous network

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117891591A (en) * 2023-12-04 2024-04-16 国网河北省电力有限公司信息通信分公司 Task unloading method and device based on edge calculation and electronic equipment

Also Published As

Publication number Publication date
CN113296953B (en) 2022-02-15

Similar Documents

Publication Publication Date Title
WO2022027776A1 (en) Edge computing network task scheduling and resource allocation method and edge computing system
CN110928654A (en) Distributed online task unloading scheduling method in edge computing system
CN111913723A (en) Cloud-edge-end cooperative unloading method and system based on assembly line
CN113259469B (en) Edge server deployment method, system and storage medium in intelligent manufacturing
CN110850957B (en) Scheduling method for reducing system power consumption through dormancy in edge computing scene
Tang et al. Research on heterogeneous computation resource allocation based on data-driven method
US20220309609A1 (en) Graph sampling and random walk acceleration method and system on GPU
CN112214301A (en) Smart city-oriented dynamic calculation migration method and device based on user preference
Miao et al. Adaptive DNN partition in edge computing environments
CN113296953B (en) Distributed computing architecture, method and device of cloud side heterogeneous edge computing network
Xue et al. A study of task scheduling based on differential evolution algorithm in cloud computing
CN113159539B (en) Method for combining green energy scheduling and dynamic task allocation in multi-layer edge computing system
CN114936708A (en) Fault diagnosis optimization method based on edge cloud collaborative task unloading and electronic equipment
Zhan et al. Field programmable gate array‐based all‐layer accelerator with quantization neural networks for sustainable cyber‐physical systems
CN113342504A (en) Intelligent manufacturing edge calculation task scheduling method and system based on cache
Chen et al. Energy and Time-Aware Inference Offloading for DNN-based Applications in LEO Satellites
Loukopoulos et al. A pareto-efficient algorithm for data stream processing at network edges
CN114301911B (en) Task management method and system based on edge-to-edge coordination
CN113747500B (en) High-energy-efficiency low-delay workflow application migration method based on generation of countermeasure network in complex heterogeneous mobile edge calculation
CN114997400A (en) Neural network acceleration reasoning method
Al Sallami Load balancing in green cloud computation
Zhou et al. Optimizing CNNs Throughput on Bandwidth-Constrained Distributed Multi-FPGA Architectures
Liu et al. Task scheduling model of edge computing for AI flow computing in Internet of Things
CN114153528B (en) Calculation unloading method for optimal positions of unmanned aerial vehicles for parallel task hovering time allocation
CN105512087B (en) Reliability evaluation method of resource-constrained multi-node computing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant