
CN118277087B - Resource scheduling policy determination method, medium, electronic device and program product - Google Patents


Info

Publication number
CN118277087B
Authority
CN
China
Prior art keywords
task
execution time
resource scheduling
workload
average
Prior art date
Legal status
Active
Application number
CN202410347271.8A
Other languages
Chinese (zh)
Other versions
CN118277087A (en)
Inventor
张晓琴
Current Assignee
Chongqing Communication Design Institute Co ltd
Original Assignee
Chongqing Communication Design Institute Co ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Communication Design Institute Co ltd filed Critical Chongqing Communication Design Institute Co ltd
Priority to CN202410347271.8A priority Critical patent/CN118277087B/en
Publication of CN118277087A publication Critical patent/CN118277087A/en
Application granted granted Critical
Publication of CN118277087B publication Critical patent/CN118277087B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F 9/505 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 - Arrangements for program control, e.g. control units
    • G06F 9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 - Multiprogramming arrangements
    • G06F 9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 - Partitioning or combining of resources
    • G06F 9/5072 - Grid computing
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Multi Processors (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a resource scheduling policy determination method for determining a resource scheduling policy according to the variable workload of a task flow. Based on a task flow whose workload equals the average workload of the variable-load task flows, the method performs a two-step screening of all available resource scheduling policies to obtain a preferred set of policies, so that when a policy must be chosen for a particular task flow, the search is restricted to that preferred set. The invention solves the technical problem of scheduling performance degradation caused by the large number of available resource scheduling policies and achieves the technical effect of improved scheduling performance.

Description

Resource scheduling policy determination method, medium, electronic device and program product
Technical Field
The invention relates to the field of cloud computing, in particular to a method for determining a resource scheduling policy for a task flow with a variable workload.
Background
With the rapid development of cloud computing technology, cloud resource scheduling has become an important research direction in the field of cloud computing. The purpose of cloud resource scheduling is to allocate the resources of a cloud computing platform to different users and applications so as to meet their needs. Resource scheduling in a cloud platform must take many factors into account, such as user demand, resource availability, and cost. A cloud resource scheduling algorithm improves the efficiency and performance of the platform by assigning resources to different tasks; the resources may include processors, memory, and network bandwidth, while the tasks may involve data processing, computation, or storage. As the volume of transactions to be processed grows across application domains, cloud computing has become an increasingly common solution, and allocating resources to the large number of pending tasks with an effective scheduling algorithm has become one of the main problems cloud computing faces. Moreover, deciding how to assign different resources to different tasks, that is, how to order task scheduling so that cloud resources are used as fully as possible under minimum-delay requirements, and how to map underlying hosts to virtual machines and virtual machines to specific tasks, is also very difficult. The success or failure of cloud resource scheduling directly affects the overall performance and efficiency of a cloud computing system and is therefore of great significance.
Current cloud resource scheduling methods fall mainly into two categories: static resource scheduling and dynamic resource scheduling. Static resource scheduling allocates resources in advance according to user demand and resource supply, generally based on task properties and priorities, but it cannot adjust in real time to the system load, which may lead to unbalanced resource utilization, task delays, and similar problems. Dynamic resource scheduling adjusts the allocation according to the real-time system load, so that resources can be allocated and released according to the real-time requirements of tasks, giving better resource utilization efficiency and flexibility. In addition, when evaluating cloud resource schedules, the Monte Carlo method can obtain fairly accurate performance estimates in a short time by simulating system behaviour, avoiding long waits and observation on a real system; the estimates can then be used to evaluate and optimize resource allocation policies and thereby improve the performance and efficiency of the cloud computing system. A blind-selection method, by contrast, allocates resources according to a fixed rule or algorithm, and applying such a rule or algorithm requires considering many factors, such as user demand, resource availability, and cost. However, Monte Carlo simulation needs a long time to find an optimal scheduling policy and is sensitive to fluctuations in the external environment, while blind selection degrades as the search space grows, resulting in reduced scheduling performance.
Disclosure of Invention
In view of the defects of the prior art, the invention provides a resource scheduling policy determination method for determining a resource scheduling policy according to the variable workload of a task flow, which solves the problem of reduced scheduling performance caused by the overly large search space of resource scheduling policies in the prior art.
According to a first aspect of the present invention, there is provided a resource scheduling policy determination method for determining a resource scheduling policy according to the variable workload of a task flow, comprising: S1, acquiring all available resource scheduling policies; S2, calculating the average workload of the task flows and selecting a corresponding average-workload task flow; S3, calculating, for every available resource scheduling policy, the execution time the data center spends executing the average-workload task flow; S4, calculating the average execution time over all the obtained execution times, and selecting the resource scheduling policies whose execution time is smaller than the average execution time as a first candidate policy set; S5, comparing the execution times of adjacent resource scheduling policies in policy order and selecting the policy with the smaller execution time into a second candidate policy set; S6, taking the intersection of the first candidate policy set and the second candidate policy set as the final candidate policy set; and S7, for task flows under different workloads, calculating the execution time required by each candidate policy in the final candidate policy set and selecting the candidate policy with the shortest execution time as the corresponding execution policy.
According to an embodiment of the present invention, in step S3 the execution time is calculated according to the following rule: S31, mapping the data center to virtual machines according to the adopted resource scheduling policy; S32, distributing all tasks in the average-workload task flow to the virtual machines; S33, taking the time the virtual machines need to execute all tasks as the execution time.
According to an embodiment of the present invention, the execution time is:

ET_total = max_{v_i ∈ VM} Σ_{t_s ∈ W, t_s → v_i} ET_time(t_s, v_i)

wherein

ET_time(t_s, v_s) = W(t_s) / C(v_s)

where ET_total is the total execution time spent under the current scheduling policy, t_s denotes any task, v_s and v_i denote any virtual machine, W denotes the set of tasks in the task flow, VM denotes the set of virtual machines, W(t_s) denotes the workload of a task, C(v_s) denotes the processing capacity of a virtual machine, ET_time(t_s, v_s) denotes the execution time required for a task to be executed by a virtual machine, and the sum runs over the tasks t_s assigned to virtual machine v_i.
According to an embodiment of the invention, the method further comprises, before the task distribution, merging tasks in the task flow according to the dependency relationships among the tasks.
According to an embodiment of the invention, task merging is performed according to the following rules: P1: judging whether each current task in the task flow has a unique child task; P2: if the unique child task exists, further judging whether that child task has a unique parent task and whether that parent task is the current task; and P3: if the unique parent task exists and is the current task, merging the current task with the unique child task and correspondingly updating the dependency relationships of the merged task.
According to an embodiment of the invention, the method further comprises constructing a linear relationship of execution time spent by the task flow with variable workload under the final candidate policy set.
According to an embodiment of the present invention, the linear relationship is constructed according to the following rule: T1, calculating the average value M_x and the standard deviation S_x of the variable workload; T2, calculating the average value M_y of the average completion time spent by all scheduling policies in the final candidate policy set over the different workloads; T3, calculating the standard deviation S_y of the average completion time spent by all scheduling policies in the final candidate policy set over the different workloads; T4, calculating the linear relation between the variable workload and the execution time as

Y = bX + A

wherein

b = S_y / S_x and A = M_y - b·M_x

where Y denotes the execution time and X denotes any workload value.
In a second aspect, according to an embodiment of the present invention, there is provided a computer program product comprising computer program code which, when run on an electronic device, causes the electronic device to perform the method according to any of the first aspects.
In a third aspect, according to an embodiment of the present invention, there is provided a computer readable storage medium having stored thereon a computer program executable by a processor to implement the steps of the method of any of the first aspects.
In a fourth aspect, according to an embodiment of the present invention, there is provided an electronic apparatus including: one or more processors; and a memory, wherein the memory is for storing executable instructions; the one or more processors are configured via the executable instructions to implement the steps of the method of any one of the first aspects.
The technical principle of the invention is as follows: based on the average workload of the variable-load task flows, a two-step screening is performed on all available resource scheduling policies to obtain a preferred set of resource scheduling policies, so that, when determining the resource scheduling policy for each task flow, only the preferred policies need to be searched.
Compared with the prior art, the invention has the following beneficial effect: by screening resource scheduling policies against a task flow with the average workload, it solves the technical problem of scheduling performance degradation caused by the large number of existing resource scheduling policies and achieves the technical effect of improved scheduling performance.
Drawings
FIG. 1 is a flow chart of a method for determining resource scheduling policies for task flows of variable workload in accordance with an embodiment of the present invention;
FIG. 2 is a bar graph of execution time required for different workloads under different ones of the final candidate scheduling policies in accordance with an embodiment of the present invention; and
FIG. 3 is a scatter plot of the required execution time and a linear plot of load and execution time for different scheduling policies in the final candidate scheduling policy for different workloads in accordance with an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings and examples.
As described in the background, the number of available resource scheduling policies keeps growing, so evaluating and selecting policies for variable-load task flows takes more and more time, which degrades scheduling performance. To solve this problem, a resource scheduling policy determination method is provided for determining a resource scheduling policy according to the variable workload of a task flow: by applying a policy screening rule based on a task flow with the average workload, the set of candidate policies is reduced and the time needed to select a policy for a task flow is shortened. In summary, as shown in FIG. 1, the method comprises: S1, acquiring all available resource scheduling policies; S2, calculating the average workload of the task flows and selecting a corresponding average-workload task flow; S3, calculating, for every available resource scheduling policy, the execution time the data center spends executing the average-workload task flow; S4, calculating the average execution time over all the obtained execution times, and selecting the resource scheduling policies whose execution time is smaller than the average execution time as a first candidate policy set; S5, comparing the execution times of adjacent resource scheduling policies in policy order and selecting the policy with the smaller execution time into a second candidate policy set; S6, taking the intersection of the first candidate policy set and the second candidate policy set as the final candidate policy set; and S7, for task flows under different workloads, calculating the execution time required by each candidate policy in the final candidate policy set and selecting the candidate policy with the shortest execution time as the corresponding execution policy. Because policy selection is performed only within the screened final candidate policy set, policy evaluation time is reduced, thereby improving scheduling performance.
In detail, assume that the set of scheduling policies available for searching in the search space is U, and that θ is any scheduling policy in U. The average workload of the task flows is calculated and a corresponding average-workload task flow is selected. For this average-workload task flow, the execution time Makespan[θ] is calculated for each scheduling policy θ. A set G of policies is selected from U by a preemptive screening rule as follows: Makespan[θ] is compared with the average AverageMakespan of all Makespan[θ], and if Makespan[θ] < AverageMakespan, the scheduling policy θ is added to the set G; this is repeated until all scheduling policies have been processed. In addition, a set S of policies is selected from U by a greedy rule as follows: if Makespan[θ] < Makespan[θ+1], the scheduling policy θ is added to the set S; otherwise the scheduling policy θ+1 is added to the set S; here θ advances with a step of 2, that is, each time θ is updated, θ = θ + 2, and this is repeated until all scheduling policies have been processed. The preferred scheduling policy set K is then taken as the final resource scheduling policy set, i.e. K = G ∩ S. Then, for a task flow with a variable workload, the execution time required by each candidate policy in K is calculated separately, and the candidate policy with the shortest execution time is selected as the final execution policy.
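For illustration only, the following Python sketch shows one way the two-step screening described above could be implemented. The policy list, the makespan(policy, workload) helper, and the handling of an unpaired trailing policy are assumptions made for the sketch, not details taken from the patent.

```python
# Illustrative sketch of the two-step screening; makespan() is an assumed helper
# that returns the execution time of a task flow under a given scheduling policy.
def screen_policies(policies, avg_workload, makespan):
    times = {p: makespan(p, avg_workload) for p in policies}      # S3: time per policy
    avg_time = sum(times.values()) / len(times)                   # S4: average execution time
    G = {p for p, t in times.items() if t < avg_time}             # first candidate set

    S = set()                                                     # S5: pairwise greedy pass
    for i in range(0, len(policies) - 1, 2):                      # theta advances with step 2
        a, b = policies[i], policies[i + 1]
        S.add(a if times[a] < times[b] else b)

    return G & S                                                  # S6: final candidate set K


def choose_policy(candidates, workload, makespan):
    # S7: for a concrete workload, pick the candidate with the shortest execution time.
    return min(candidates, key=lambda p: makespan(p, workload))
```

Under this reading, a trailing policy with no pair is simply skipped by the greedy pass; the text does not specify that case.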
According to an embodiment of the present invention, in step S3 the execution time is calculated according to the following rule: for each scheduling policy θ, the data center Datacenter is mapped to the virtual machines VM, i.e.

Datacenter[i] = VMnum[rand() % n - 1] + 1

where n is the number of tasks and VMnum denotes any virtual machine. As shown in FIG. 2, there are 5 data centers and 20 virtual machines, and the 5 data centers are mapped to the 20 virtual machines using a time scheduling policy.
Furthermore, the task set Cloudset is distributed to the virtual machines VM, i.e.

VM[i] = Cloudset[rand() % n - 1] + 1

where Cloudset denotes any task. With continued reference to FIG. 2, n tasks are distributed to the 20 virtual machines using a spatial scheduling policy.
Assuming that task t_s is allocated to virtual machine v_s, the execution time of task t_s is determined mainly by the workload W(t_s) of the task and the processing capacity C(v_s) of the virtual machine, and can be expressed as

ET_time(t_s, v_s) = W(t_s) / C(v_s)

Based on the above formula, the overall execution time of the average-workload task flow under a given scheduling policy is obtained as

ET_total = max_{v_i ∈ VM} Σ_{t_s ∈ W, t_s → v_i} ET_time(t_s, v_i)

where ET_total is the total execution time spent under the current scheduling policy, t_s denotes any task, v_s and v_i denote any virtual machine, W denotes the set of tasks in the task flow, VM denotes the set of virtual machines, W(t_s) denotes the workload of a task, C(v_s) denotes the processing capacity of a virtual machine, ET_time(t_s, v_s) denotes the execution time required for a task to be executed by a virtual machine, and the sum runs over the tasks t_s assigned to virtual machine v_i.
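As an illustration (not the patent's own code), the following sketch computes ET_time(t_s, v_s) for every task and derives the total execution time under the max-over-VMs reading given above; the task-to-VM assignment and the numeric values in the usage example are assumptions.

```python
from collections import defaultdict

# Illustrative sketch: ET_time(t, v) = W(t) / C(v); the total execution time is taken
# as the largest per-VM sum of task execution times.
def execution_time(workloads, capacities, assignment):
    """workloads[t] = W(t); capacities[v] = C(v); assignment[t] = VM that runs task t."""
    per_vm = defaultdict(float)
    for task, vm in assignment.items():
        per_vm[vm] += workloads[task] / capacities[vm]   # ET_time(t, v)
    return max(per_vm.values())                          # ET_total over all VMs

# Tiny usage example with assumed numbers:
W = {"t1": 100.0, "t2": 250.0, "t3": 150.0}     # task workloads
C = {"v1": 50.0, "v2": 25.0}                    # VM processing capacities
A = {"t1": "v1", "t2": "v2", "t3": "v1"}        # task -> VM assignment
print(execution_time(W, C, A))                  # v1 sums to 5.0, v2 to 10.0 -> 10.0
```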
In order to reduce the communication overhead between different instances during workflow scheduling, according to an embodiment of the invention the method further comprises, before the task distribution, merging tasks in the task flow according to the dependency relationships among the tasks.
According to an embodiment of the invention, task merging is performed according to the following rules: P1: judging whether each current task in the task flow has a unique child task; P2: if the unique child task exists, further judging whether that child task has a unique parent task and whether that parent task is the current task; and P3: if the unique parent task exists and is the current task, merging the current task with the unique child task and correspondingly updating the dependency relationships of the merged task.
In detail, the task merging and workflow updating operations are as follows:
Initializing the workflow: the m initial tasks form a workflow w;
Padding the workflow: a workflow entry t_entry and a workflow exit t_exit are added to the workflow w;
Generating a task sequence: a task sequence is generated from the workflow entry t_entry;
Merging tasks: for each task t_i in the task sequence, it is determined whether t_i has a unique child task t_s; if a unique child task t_s exists, it is further determined whether the parent task of t_s is unique and equal to t_i; if t_s has the unique parent task t_i, the workload of task t_s is updated to W(t_s) = W(t_i) + W(t_s), and at the same time the parent-task set of t_s and the child-task sets of the parent tasks of t_i are updated;
Updating the task sequence: based on the merge operation, task t_i is removed from the task sequence;
Updating the workflow: the workflow w' composed of the merged tasks is output.
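As an illustration of the merging rule above (a sketch under assumed data structures, not code from the patent), each task records its workload, parent set, and child set; a task whose unique child has that task as its only parent is folded into the child and removed from the sequence.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    workload: float
    parents: set = field(default_factory=set)    # names of parent tasks
    children: set = field(default_factory=set)   # names of child tasks

def merge_chains(tasks):
    """tasks: dict mapping name -> Task. Merge t_i into its unique child t_s whenever
    t_i is the only parent of t_s, accumulating the workload into t_s."""
    for name in list(tasks):
        t_i = tasks.get(name)
        if t_i is None or len(t_i.children) != 1:
            continue                                   # P1: t_i must have a unique child
        (child_name,) = t_i.children
        t_s = tasks[child_name]
        if t_s.parents != {t_i.name}:
            continue                                   # P2: t_i must be the unique parent of t_s
        t_s.workload += t_i.workload                   # P3: W(t_s) = W(t_i) + W(t_s)
        t_s.parents = set(t_i.parents)                 # update the parent set of t_s
        for p in t_i.parents:                          # update the child sets of t_i's parents
            tasks[p].children.discard(t_i.name)
            tasks[p].children.add(t_s.name)
        del tasks[name]                                # remove t_i from the task sequence
    return tasks
```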
In order to study the relationship of variable workload to corresponding execution time under a final candidate policy set, according to an embodiment of the invention, the method further comprises constructing a linear relationship of execution time spent by the task flow with variable workload under the final candidate policy set.
The construction of the linear relationship operates as follows:
Initializing specific workloads, where Workload[4] = rand() % (1000 - 200) + 1;
Calculating the execution time required by the different workloads under each scheduling policy, namely Makespan[θ] = executiontime(θ), where θ is any policy in the preferred scheduling policy set K;
Calculating the average value M_x of the variable workload in the cloud environment;
Calculating the standard deviation S_x of the variable workload in the cloud environment;
Calculating the average value M_y of the average completion time of the scheduling policies in the set K over the different workloads;
Calculating the standard deviation S_y of the average completion time of the scheduling policies in the set K over the different workloads;
The relationship between the variable workload and the execution time is then calculated as

Y = bX + A

wherein

b = S_y / S_x and A = M_y - b·M_x

where Y denotes the execution time and X denotes any workload value.
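The following sketch is illustrative only: the candidate policy list, the workload samples, and the makespan(policy, workload) helper are assumptions. It computes the slope and intercept exactly as described, b = S_y / S_x and A = M_y - b·M_x, with the means and standard deviations taken over the sampled workloads.

```python
import statistics

def fit_workload_time_relation(workloads, candidate_policies, makespan):
    """Return (b, A) so that the predicted execution time is Y = b * X + A."""
    # Average completion time over all candidate policies, for each workload X.
    avg_times = [
        sum(makespan(p, w) for p in candidate_policies) / len(candidate_policies)
        for w in workloads
    ]
    m_x, s_x = statistics.mean(workloads), statistics.stdev(workloads)   # T1
    m_y = statistics.mean(avg_times)                                     # T2
    s_y = statistics.stdev(avg_times)                                    # T3
    b = s_y / s_x                                                        # T4
    a = m_y - b * m_x
    return b, a
```

With the sample standard deviation, the four workloads 250, 300, 350 and 400 used in the experiment below give S_x = 64.55, matching the reported value.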
To verify the proposed method, we performed the following experiment.
The specific experimental environment of the invention is as follows: an i7-4980 processor at 2.8 GHz with 16 GB of memory. CloudSim is used as the simulation tool to verify the preferred scheduling policy set obtained from the candidate policies. A task flow with an average workload of 325 is initialized, together with 30 candidate scheduling policies, i.e. U = {θ1, θ2, ..., θ30}, and the execution time required under each candidate policy is calculated. The preemptive screening rule yields the set G = {θ3, θ16, θ19, θ25, θ11, θ30, θ18, θ6, θ26, θ24, θ23, θ7, θ9, θ17, θ13, θ2, θ10}, the greedy rule yields the set S = {θ1, θ2, θ3, θ4, θ5, θ8, θ11, θ13, θ16, θ18, θ19, θ24, θ26, θ29, θ30}, and their intersection is G ∩ S = {θ2, θ3, θ11, θ13, θ16, θ18, θ19, θ24, θ26, θ30}. Four different workloads are then initialized: 250, 300, 350, and 400. The different workloads are processed using the G ∩ S scheduling policies, and their execution times under the preferred scheduling policy set are recorded, as shown in FIG. 2.
Meanwhile, in order to verify the correlation between workload and execution time under the different scheduling policies, the average value and standard deviation of the execution time required by the different scheduling policies under the different workloads are computed. The average value M_x of the different workloads in the cloud computing environment is 325, the average value M_y of the average completion time over the different workloads is 576.6, the standard deviation S_x of the different workloads is 64.55, and the standard deviation S_y of the average completion time is 95.60; based on this, the relationship Y = 1.48X + 95.6 between workload and execution time is obtained, as shown in FIG. 3.
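As a consistency check (arithmetic added for illustration, not part of the original patent text), the reported coefficients follow directly from the stated statistics via b = S_y / S_x and A = M_y - b·M_x:

```python
# Reproducing the reported fit from the stated statistics (illustrative check only).
M_x, M_y = 325, 576.6      # mean workload, mean average completion time
S_x, S_y = 64.55, 95.60    # corresponding standard deviations

b = S_y / S_x              # 95.60 / 64.55 = 1.481...
A = M_y - b * M_x          # = 95.27; rounding b to 1.48 first gives the reported 95.6
print(f"Y = {b:.2f}X + {A:.1f}")   # Y = 1.48X + 95.3, close to the reported Y = 1.48X + 95.6
```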
The above experiments verify the proposed cloud resource scheduling policy determination method. On the one hand, subsets smaller than the candidate set are obtained by the preemptive screening rule and the greedy algorithm, and their intersection yields the preferred scheduling policy set, which reduces the search space. On the other hand, the execution time is measured under different workload scenarios, yielding the correlation between workload and execution time, which provides help and support for subsequent cloud resource scheduling algorithms.
Finally, it is noted that the above embodiments are intended only to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications and equivalents may be made without departing from the spirit and scope of the technical solution of the present invention, all of which are intended to be covered by the claims of the present invention.

Claims (8)

1. A method for determining a resource scheduling policy, configured to determine the resource scheduling policy according to a variable workload of a task flow, wherein:
S1, acquiring all available resource scheduling strategies;
S2, calculating the average workload of the task flows, and selecting a corresponding average workload task flow;
s3, respectively calculating the execution time spent by the data center for executing the average workload task flow under all available resource scheduling strategies;
s4, calculating average execution time based on all the obtained execution time, and selecting a resource scheduling strategy corresponding to the execution time smaller than the average execution time as a first candidate strategy set;
S5, comparing the execution time spent by adjacent resource scheduling strategies according to the sequence of the resource scheduling strategies, and selecting the resource scheduling strategy corresponding to the smaller execution time as a second candidate strategy set;
S6, taking an intersection set of the first candidate strategy set and the second candidate strategy set as a final candidate strategy set;
S7, for task flows under different workloads, respectively calculating the execution time required by each candidate strategy in the final candidate strategy set, and selecting the candidate strategy corresponding to the shortest execution time as the corresponding execution strategy;
the resource scheduling policy determining method further comprises the steps of constructing a linear relation of execution time spent by the task flow with the variable workload under the final candidate policy set;
wherein the linear relationship is constructed according to the following rule:
T1, calculating an average value M_x and a standard deviation S_x of the variable workload;
T2, calculating an average value M_y of the average completion time spent by all scheduling policies in the final candidate policy set for the different workloads;
T3, calculating a standard deviation S_y of the average completion time spent by all scheduling policies in the final candidate policy set for the different workloads;
T4, calculating the linear relation between the variable workload and the execution time as
Y = bX + A
wherein
b = S_y / S_x and A = M_y - b·M_x,
where Y denotes the execution time and X denotes any workload value.
2. The method according to claim 1, characterized in that in said step S3, the execution time is calculated according to the following rules:
s31, mapping the data center to the virtual machine according to the adopted resource scheduling strategy;
s32, distributing all tasks in the average workload task flow to the virtual machine;
S33, calculating the time when the virtual machine performs all tasks as the execution time.
3. The method of claim 2, wherein the execution time is:
ET_total = max_{v_i ∈ VM} Σ_{t_s ∈ W, t_s → v_i} ET_time(t_s, v_i),
wherein
ET_time(t_s, v_s) = W(t_s) / C(v_s),
where ET_total is the total execution time spent under the current scheduling policy, t_s denotes any task, v_s and v_i denote any virtual machine, W denotes the set of tasks in the task flow, VM denotes the set of virtual machines, W(t_s) denotes the workload of a task, C(v_s) denotes the processing capacity of a virtual machine, and ET_time(t_s, v_s) denotes the execution time required for a task to be executed by a virtual machine.
4. The method of claim 2, further comprising, prior to the task distribution, merging tasks in the task flow according to the dependencies between the tasks.
5. The method of claim 4, wherein task merging is performed according to the following rules:
P1: judging whether each current task in the task flow has a unique child task;
P2: if the unique child task exists, further judging whether that child task has a unique parent task and whether that parent task is the current task; and
P3: if the unique parent task exists and is the current task, merging the current task with the unique child task and correspondingly updating the dependency relationships of the merged task.
6. A computer program product, characterized in that the computer program product comprises computer program code which, when run on an electronic device, causes the electronic device to perform the method according to any of claims 1-5.
7. A computer readable storage medium, having stored thereon a computer program executable by a processor to implement the steps of the method of any of claims 1-5.
8. An electronic device, comprising:
one or more processors; and
A memory, wherein the memory is for storing executable instructions;
the one or more processors are configured via the executable instructions to implement the steps of the method of any one of claims 1-5.
CN202410347271.8A 2024-03-26 2024-03-26 Resource scheduling policy determination method, medium, electronic device and program product Active CN118277087B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410347271.8A CN118277087B (en) 2024-03-26 2024-03-26 Resource scheduling policy determination method, medium, electronic device and program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410347271.8A CN118277087B (en) 2024-03-26 2024-03-26 Resource scheduling policy determination method, medium, electronic device and program product

Publications (2)

Publication Number Publication Date
CN118277087A CN118277087A (en) 2024-07-02
CN118277087B true CN118277087B (en) 2024-09-20

Family

ID=91648150

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410347271.8A Active CN118277087B (en) 2024-03-26 2024-03-26 Resource scheduling policy determination method, medium, electronic device and program product

Country Status (1)

Country Link
CN (1) CN118277087B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005503A (en) * 2015-07-26 2015-10-28 孙凌宇 Cellular automaton based cloud computing load balancing task scheduling method
CN105159762A (en) * 2015-08-03 2015-12-16 冷明 Greedy strategy based heuristic cloud computing task scheduling method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8869165B2 (en) * 2008-03-20 2014-10-21 International Business Machines Corporation Integrating flow orchestration and scheduling of jobs and data activities for a batch of workflows over multiple domains subject to constraints
CN111240818B (en) * 2020-01-09 2023-08-08 黔南民族师范学院 Task scheduling energy-saving method in heterogeneous GPU heterogeneous system environment
CN112380016A (en) * 2020-11-30 2021-02-19 华南理工大学 Cloud computing resource load balancing scheduling method based on improved genetic algorithm and application
CN114780219A (en) * 2022-04-21 2022-07-22 湘潭大学 Intelligent building task scheduling method based on edge calculation and three-branch decision
CN117724811A (en) * 2023-11-06 2024-03-19 普华基础软件股份有限公司 Hierarchical multi-core real-time scheduler

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105005503A (en) * 2015-07-26 2015-10-28 孙凌宇 Cellular automaton based cloud computing load balancing task scheduling method
CN105159762A (en) * 2015-08-03 2015-12-16 冷明 Greedy strategy based heuristic cloud computing task scheduling method

Also Published As

Publication number Publication date
CN118277087A (en) 2024-07-02

Similar Documents

Publication Publication Date Title
Warneke et al. Exploiting dynamic resource allocation for efficient parallel data processing in the cloud
US8893148B2 (en) Performing setup operations for receiving different amounts of data while processors are performing message passing interface tasks
US8312464B2 (en) Hardware based dynamic load balancing of message passing interface tasks by modifying tasks
US8108876B2 (en) Modifying an operation of one or more processors executing message passing interface tasks
Hwang et al. Minimizing cost of virtual machines for deadline-constrained mapreduce applications in the cloud
US8127300B2 (en) Hardware based dynamic load balancing of message passing interface tasks
CN105843683B (en) Method, system and apparatus for dynamically optimizing platform resource allocation
US7698529B2 (en) Method for trading resources between partitions of a data processing system
US8544005B2 (en) Autonomic method, system and program product for managing processes
CN113095474A (en) Resource usage prediction for deep learning models
CN111488205B (en) Scheduling method and scheduling system for heterogeneous hardware architecture
CN112416585A (en) GPU resource management and intelligent scheduling method for deep learning
US20090064166A1 (en) System and Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks
El-Gamal et al. Load balancing enhanced technique for static task scheduling in cloud computing environments
Galante et al. Adaptive parallel applications: from shared memory architectures to fog computing (2002–2022)
Ravi et al. Valuepack: value-based scheduling framework for CPU-GPU clusters
CN118277087B (en) Resource scheduling policy determination method, medium, electronic device and program product
CN108287762B (en) Distributed computing interactive mode use resource optimization method and computer equipment
Yassir et al. Graph-based model and algorithm for minimising big data movement in a cloud environment
JP6732693B2 (en) Resource allocation control system, resource allocation control method, and program
Chahal et al. Simulation based job scheduling optimization for batch workloads
Hugo et al. A runtime approach to dynamic resource allocation for sparse direct solvers
Wu et al. Modeling the virtual machine launching overhead under fermicloud
Chhabra et al. Qualitative Parametric Comparison of Load Balancing Algorithms in Distributed Computing Environment
Upadhye et al. Cloud resource allocation as non-preemptive approach

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant