
CN102195886A - Service scheduling method on cloud platform - Google Patents

Service scheduling method on cloud platform Download PDF

Info

Publication number
CN102195886A
CN102195886A
Authority
CN
China
Prior art keywords
node
cloud platform
cpu
service
disk
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011101413883A
Other languages
Chinese (zh)
Other versions
CN102195886B (en)
Inventor
兰雨晴
王钧
孙坤建
冯运辉
黎立
张冠星
臧文娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongbiao Huian Information Technology Co Ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201110141388.3A priority Critical patent/CN102195886B/en
Publication of CN102195886A publication Critical patent/CN102195886A/en
Application granted granted Critical
Publication of CN102195886B publication Critical patent/CN102195886B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Multi Processors (AREA)

Abstract

The invention relates to a service scheduling method on a cloud platform. Scheduling here means distributing users' service requests across the nodes of the cloud platform so as to balance the load and improve the platform's response speed. The method mainly comprises the following steps: each node of the cloud platform periodically reports its local resource utilization (CPU utilization, memory utilization and disk utilization) to a scheduler, so that the actual load of every node is accurately reflected, and the scheduler applies a weighted least-connection dynamic scheduling method that takes these resource utilizations into account, realizing WSSC (Web Service Scheduling on Cloud). The method can improve the service response speed of the cloud platform to a great extent.

Description

Service scheduling method on a cloud platform
Technical field
The present invention relates to the field of computer application technology, and in particular to a service scheduling method on a cloud platform.
Background technology
At present, cloud computing has become a hot topic in the IT and Internet industries and has attracted broad attention. Cloud computing is not in fact a new technology; rather, it is a new delivery and consumption model for IT infrastructure. It is the product of the convergence of traditional computing technologies such as grid computing, distributed computing, parallel computing, utility computing, network storage, virtualization, load balancing and networking. Its aim is to integrate, over the network, many relatively low-cost computing entities into a single system with powerful computing capability, and to deliver that capability to end users through business models such as SaaS, PaaS, IaaS and MSP. A core idea of cloud computing is to keep increasing the processing capability of the "cloud" so as to reduce the burden on user terminals, ultimately reducing the terminal to a simple input/output device that can enjoy the computing power of the "cloud" on demand.
The core of cloud computing is the unified management and scheduling of a large number of computing resources connected by the network, forming a resource pool that serves users or applications on demand. On-demand service requires deploying traditional applications onto the cloud platform and delivering them to particular users as services; how to respond quickly to users' service requests therefore becomes particularly important.
Summary of the invention
The service scheduling method of the present invention improves the speed with which the cloud platform responds to user service requests while keeping the load of the nodes of the cloud platform balanced.
The service scheduling method on a cloud platform provided by the invention comprises the following steps:
1) Each node of the cloud platform calculates its CPU utilization, memory utilization and disk utilization, and sends the results to the scheduler over a network protocol.
2) Based on the resource utilization of each node of the cloud platform, the scheduler uses a weighting method to calculate the weight of each node.
3) The node that responds to the current service is selected according to each node's weight and connection count.
In step 1), the CPU utilization is derived from the idle-process time divided by the total CPU time, the memory utilization from the free memory divided by the total memory, and the disk utilization from the free disk space divided by the total disk space.
In step 2), the resource utilizations from step 1) are combined by a weighted average to obtain the load weight of each node of the cloud platform.
In step 3), service requests are dispatched with the WLC (Weighted Least-Connection Scheduling) algorithm, using the weights obtained in step 2).
Description of drawings
Fig. 1 is a flow chart of the scheduling method of the present invention (WSSC).
Fig. 2 is a flow chart of the WLC scheduling method.
Embodiment
To make the features and advantages of the present invention easier to understand, they are described in detail below with reference to the accompanying drawings.
The concrete implementation of the invention is described below using a cloud platform running the Linux operating system as an example.
As shown in Fig. 1, the right half of the figure represents a node of the cloud platform and the left half the scheduler. Each node periodically obtains its own CPU utilization, memory utilization and disk utilization and sends this information to the scheduler. From this information the scheduler computes, by a weighting method, a weight reflecting the resource usage of each node. When a service request arrives, the scheduler uses the WLC algorithm, based on the current resource usage and connection count of each node, to find the most lightly loaded node, forwards the request to that node so it can answer the request, and updates that node's connection count.
Concrete steps are as follows:
Step 1: In a Linux system the file /proc/stat records the various times accumulated since the operating system started, such as the time spent running user-space programs and the time spent in kernel mode, so the CPU times can be obtained simply by parsing this file. The code is as follows:
(Code listing shown as figure BSA00000506152900031 in the original document.)
In the above code:
user_time is the CPU time accumulated in user mode since system start-up, excluding time used by processes with a negative nice value.
nice_time is the CPU time accumulated since system start-up by processes with a negative nice value.
system_time is the CPU time accumulated in kernel mode since system start-up.
idle_time is the idle time accumulated since system start-up, excluding time spent waiting for disk I/O.
iowait_time is the time accumulated since system start-up waiting for disk I/O.
irq_time is the time accumulated since system start-up servicing hardware interrupts.
softirq_time is the time accumulated since system start-up servicing soft interrupts.
The sum of these items is the total CPU time. The present invention takes 1 - (idle + iowait) / total CPU time as the CPU utilization; a sketch of this computation is given below.
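The original code listing is available only as an image, so the following is merely an illustrative C sketch of the computation described above, under the assumption that reading the aggregate "cpu" line of /proc/stat is sufficient; the function name and error handling are this sketch's own, not the patent's.

#include <stdio.h>

/* Illustrative sketch (not the patent's listing): parse the aggregate "cpu"
 * line of /proc/stat and return 1 - (idle + iowait) / total as described. */
double cpu_utilization(void)
{
    unsigned long long user, nice, system, idle, iowait, irq, softirq;
    FILE *fp = fopen("/proc/stat", "r");
    if (!fp)
        return -1.0;
    /* First line: "cpu  user nice system idle iowait irq softirq ..." */
    if (fscanf(fp, "cpu %llu %llu %llu %llu %llu %llu %llu",
               &user, &nice, &system, &idle, &iowait, &irq, &softirq) != 7) {
        fclose(fp);
        return -1.0;
    }
    fclose(fp);
    unsigned long long total = user + nice + system + idle + iowait + irq + softirq;
    return 1.0 - (double)(idle + iowait) / (double)total;
}

In practice a node would typically sample /proc/stat twice and compute the utilization over the interval between the samples; the cumulative form shown here follows the description above.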
Similarly, /proc/meminfo records the current memory usage of the system, so the memory usage can be obtained simply by parsing this file. In the code that calculates the memory utilization, total denotes the total memory size and free the currently free memory; the present invention takes 1 - free/total as the memory utilization. A sketch of this computation is given below.
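The corresponding listing is likewise not reproduced in this text, so the following C sketch only illustrates the described computation; it assumes the MemTotal and MemFree fields of /proc/meminfo are the values meant by total and free, and the function name is illustrative.

#include <stdio.h>

/* Illustrative sketch (not the patent's listing): parse MemTotal and MemFree
 * from /proc/meminfo and return 1 - free/total as described. */
double memory_utilization(void)
{
    FILE *fp = fopen("/proc/meminfo", "r");
    if (!fp)
        return -1.0;
    char line[256];
    unsigned long long total = 0, free_kb = 0;
    while (fgets(line, sizeof(line), fp)) {
        sscanf(line, "MemTotal: %llu kB", &total);
        sscanf(line, "MemFree: %llu kB", &free_kb);
    }
    fclose(fp);
    if (total == 0)
        return -1.0;
    return 1.0 - (double)free_kb / (double)total;
}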
The disk usage can be obtained with the df tool shipped with Linux, and the code that calculates the disk utilization is as follows:
(Code listing shown as figure BSA00000506152900042 in the original document.)
In the above code, "df -v | grep '/' | awk '{print $5}'" takes the fifth column of the lines of the df -v output that contain '/', and this column is the disk utilization. The CPU utilization, memory utilization and disk utilization computed above are sent to the scheduler over the computer network using the standard UDP protocol; the detailed flow is shown in the right half of Fig. 1. A sketch of the disk computation is given below.
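Since the listing is again shown only as an image, the following C sketch merely illustrates the idea. It deliberately uses "df -P /" with an awk row filter instead of the exact "df -v | grep '/'" pipeline quoted above, so that the root filesystem is selected unambiguously; that substitution, and the function name, are this sketch's own choices.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch (not the patent's listing): run df through popen()
 * and read the usage column ("Use%") for the root filesystem. */
double disk_utilization(void)
{
    FILE *fp = popen("df -P / | awk 'NR==2 {print $5}'", "r");
    if (!fp)
        return -1.0;
    char buf[32];
    double pct = -1.0;
    if (fgets(buf, sizeof(buf), fp))
        pct = atof(buf) / 100.0;   /* "42%" -> 0.42 */
    pclose(fp);
    return pct;
}

The three utilization values would then be packed into a datagram and sent to the scheduler over an ordinary UDP socket, as described above.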
Step 2: To keep the algorithm simple and efficient and to minimize its own overhead, the present invention analyses the load characteristics of the cloud platform nodes and of the services they provide. Three parameters are extracted for each node server: the disk utilization L_storage, the CPU utilization L_cpu and the memory utilization L_memory, all computed in step 1.
The current processing capability of a node in the cloud platform is computed from these three parameters of the node server. Concretely, each parameter is assigned a weight coefficient ξ (with Σξ = 1); the ξ corresponding to each parameter is chosen according to how strongly that parameter affects the node server's service performance.
Suppose there are n service servers in the cloud platform. The system utilization L(s_i) of node server s_i at time t is
L(s_i) = ξ_cpu · L_cpu + ξ_storage · L_storage + ξ_memory · L_memory,
i = 0, 1, 2, ..., n-1, Σξ = 1,
where L_cpu is the CPU utilization, L_storage the disk utilization and L_memory the memory utilization.
Consider the possible values of L(s_i): when every parameter of a server node equals 1, the node is running at full capacity and L(s_i) = 1; when every load parameter equals 0, the server is idle and L(s_i) = 0. Hence L(s_i) ∈ [0, 1].
In practice the CPU or memory is not allowed to reach 100% utilization, so the present invention sets thresholds δ_cpu, δ_storage and δ_memory for the three load parameters, where δ_cpu is the threshold of L_cpu, δ_storage the threshold of L_storage and δ_memory the threshold of L_memory. When any one of L_cpu, L_storage or L_memory exceeds its threshold, the node server is judged to be fully loaded. A fully loaded server node does not take part in scheduling.
The present invention also takes into account the differences in processing capability between node servers and introduces the notion of intrinsic processing capability to quantify a node server's maximum processing capability. In the following, the intrinsic processing capability of a node server is denoted by the parameter Φ: the larger Φ, the stronger the node server's processing capability. For example, an ideal value of Φ is the average execution speed of single-word fixed-point instructions.
After the current system utilization has been quantified as L(s_i), a dynamic weight adjustment model is given based on the intrinsic processing capability Φ of the server node set beforehand:
(Formula shown as figure BSA00000506152900061 in the original document.)
In the above formula λ is the adjustment factor of the weight: the larger λ, the more strongly a change in L(s_i) influences W.
W is set to 0 in all other cases, i.e. when the node server is running at full load or when an error occurs on the node server (which may be caused by hardware or by software). A server with W = 0 does not take part in scheduling.
The above describes the real-time processing capability of a server node: L(s_i) represents the current load of the system and is inversely related to the processing capability represented by W, i.e. the heavier the load on a node server, the smaller its capacity to handle new tasks. This relationship describes the real-time processing capability of a server node more accurately. The code is implemented as follows:
(Code listing shown as figures BSA00000506152900062 and BSA00000506152900071 in the original document.)
In the above code, phi denotes the intrinsic processing capability of the node, cpu the CPU utilization, max_cpu the CPU utilization threshold and xicpu the CPU weight coefficient, and analogously for memory and disk. A hedged sketch of such a weight computation is given below.
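The exact weight formula and its code appear only as images in the source, so the C sketch below is an assumption that merely matches the surrounding description: the load L(s_i) is the ξ-weighted average of the three utilizations, a node above any threshold gets weight 0, and otherwise the weight grows with the intrinsic capability phi and shrinks with the load, with lambda controlling how strongly the load influences the weight. The specific expression phi * (1 - L)^lambda, and the function name, are illustrative only, not the patent's formula.

#include <math.h>

/* Hedged sketch of step 2 (the patent's exact formula is not reproduced here):
 * utilizations and coefficients are in [0, 1], coefficients sum to 1. */
double node_weight(double phi, double lambda,
                   double cpu, double mem, double disk,              /* utilizations */
                   double max_cpu, double max_mem, double max_disk,  /* thresholds */
                   double xi_cpu, double xi_mem, double xi_disk)     /* coefficients */
{
    /* A node above any threshold is treated as fully loaded and excluded. */
    if (cpu > max_cpu || mem > max_mem || disk > max_disk)
        return 0.0;

    double load = xi_cpu * cpu + xi_mem * mem + xi_disk * disk;  /* L(s_i) */
    return phi * pow(1.0 - load, lambda);  /* assumed form: larger lambda makes
                                              changes in load affect W more */
}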
Step 3: After the current weight W(s_i) of every node server has been calculated in step 2, the existing WLC algorithm is applied together with the current number of active user connections C(s_i) to find the node server with the smallest value of C(s_i)/W(s_i); the smaller this ratio, the lighter the load on that node server and the better it can handle a newly arrived user service request.
Let the set of node servers be S = {s_0, s_1, ..., s_{n-1}}, where n is the total number of servers;
the set of node server loads be L = {L(s_0), L(s_1), ..., L(s_{n-1})}, where s_i (i = 0, 1, ..., n-1) is a node server and L(s_i) is the load of node server s_i;
W(s_i) denote the weight of node server s_i, representing the processing capability of s_i;
C(s_i) denote the current number of active connections of node server s_i;
and the total number of active connections in the system be C_SUM = Σ_{i=0}^{n-1} C(s_i).
When a new task request arrives, the new connection is sent to node server s_j if and only if it satisfies
(C(s_j)/C_SUM) / W(s_j) = min{ (C(s_i)/C_SUM) / W(s_i) },
which, since C_SUM is common to all nodes, is equivalent to
C(s_j) / W(s_j) = min{ C(s_i) / W(s_i) },
for i = 0, 1, ..., n-1, with W(s_j) ≠ 0 and W(s_i) ≠ 0.
In short, WLC picks the node server for which the ratio of its current connection count to its weight is smallest; the node satisfying this condition is the least loaded node.
Since multiplication requires fewer CPU cycles than division, and the weights of fault-free servers are all greater than zero, provided it is guaranteed that a server node with W(s_j) = 0 can never be selected, the judgment condition of the formula above can be optimized to
C(s_j) * W(s_i) > C(s_i) * W(s_j),
where i and j index different nodes in the cloud platform. The algorithm consists of two parts: starting from i = 0 it first searches for available node servers, i.e. those with W(s_i) > 0; among the available node servers it then finds the one satisfying the formula above, which is the least loaded node server produced by WLC. The code is implemented as follows:
(Code listing shown as figure BSA00000506152900081 in the original document.)
The above code selects the most lightly loaded node among the N nodes; weight denotes the weight of each node computed in step 2 and con the current connection count of each node. In other words, the weight calculation of the existing WLC algorithm is replaced here by the weighting method of step 2, which reflects the actual load of each node in the cloud platform more faithfully and therefore makes the service scheduling more reasonable. A sketch of the selection loop is given below.
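The listing itself is shown only as an image, so the following C sketch reconstructs the selection loop from the description: it scans the weight and con arrays, skips nodes with weight 0, and keeps the node minimizing C(s_i)/W(s_i) using the multiplication form of the comparison given above. The function name and array-based interface are this sketch's own.

/* Illustrative sketch of step 3: pick the node with the smallest C/W ratio
 * among nodes with positive weight, without performing any division. */
int select_node(const double *weight, const int *con, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (weight[i] <= 0.0)          /* skip fully loaded or faulty nodes */
            continue;
        if (best < 0 ||
            (double)con[best] * weight[i] > (double)con[i] * weight[best])
            best = i;                   /* node i has a smaller C/W ratio */
    }
    return best;                        /* -1 if no node is available */
}

The scheduler would call this with the weights reported in step 2 and the connection counters it maintains per node, forward the request to the returned node, and then increment that node's connection count.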
The above example explains in detail how each part of the present invention is implemented, but the specific implementation forms of the invention are not limited to it; for those skilled in the art, any obvious modifications made without departing from the spirit of the method of the invention and the scope of the claims fall within the protection scope of the present invention.

Claims (6)

1. A method for scheduling services on a cloud platform, characterized in that it comprises the following steps:
1) each node of the cloud platform calculates its CPU utilization, memory utilization and disk utilization and sends the results to a scheduler over a network protocol;
2) based on the resource utilization of each node of the cloud platform, the scheduler uses a weighting method to calculate the weight of each node of the cloud platform;
3) the node that responds to the current service is selected according to each node's weight and connection count.
2. The method of claim 1, characterized in that in step 1) the CPU utilization is obtained from the time the CPU spends executing the idle process divided by the total CPU time.
3. The method of claim 1, characterized in that in step 1) the memory utilization is obtained from the free memory size divided by the total memory size.
4. The method of claim 1, characterized in that in step 1) the disk utilization is obtained from the free disk size divided by the total disk size.
5. The method of claim 1, characterized in that in step 2) the weight of each node in the cloud platform is calculated as a weighted average of the CPU utilization, memory utilization and disk utilization obtained in step 1).
6. The method of claim 1, characterized in that in step 3) the WLC (Weighted Least-Connection Scheduling) algorithm is applied to the weights obtained in step 2) to find the most lightly loaded node, the service request is forwarded to that node so that it can answer the request, and the connection count of that node is updated.
CN201110141388.3A 2011-05-30 2011-05-30 Service scheduling method on cloud platform Active CN102195886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110141388.3A CN102195886B (en) 2011-05-30 2011-05-30 Service scheduling method on cloud platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110141388.3A CN102195886B (en) 2011-05-30 2011-05-30 Service scheduling method on cloud platform

Publications (2)

Publication Number Publication Date
CN102195886A true CN102195886A (en) 2011-09-21
CN102195886B CN102195886B (en) 2014-02-05

Family

ID=44603294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110141388.3A Active CN102195886B (en) 2011-05-30 2011-05-30 Service scheduling method on cloud platform

Country Status (1)

Country Link
CN (1) CN102195886B (en)

Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102710779A (en) * 2012-06-06 2012-10-03 合肥工业大学 Load balance strategy for allocating service resource based on cloud computing environment
CN102801766A (en) * 2011-11-18 2012-11-28 北京安天电子设备有限公司 Method and system for load balancing and data redundancy backup of cloud server
CN103024081A (en) * 2013-01-04 2013-04-03 福建星网视易信息系统有限公司 Peer-to-peer communication terminal dispatching method adaptable to time-effect-guaranteed communication systems
CN103179048A (en) * 2011-12-21 2013-06-26 中国电信股份有限公司 Method and system for changing main machine quality of service (QoS) strategies of cloud data center
CN103179217A (en) * 2013-04-19 2013-06-26 中国建设银行股份有限公司 Load balancing method and device applicable to WEB application server group
CN103258149A (en) * 2012-07-27 2013-08-21 天津中启创科技有限公司 Online reading system and method based on cloud computing
CN103338228A (en) * 2013-05-30 2013-10-02 江苏大学 Cloud calculating load balancing scheduling algorithm based on double-weighted least-connection algorithm
CN103812895A (en) * 2012-11-12 2014-05-21 华为技术有限公司 Scheduling method, management nodes and cloud computing cluster
CN104023042A (en) * 2013-03-01 2014-09-03 清华大学 Cloud platform resource scheduling method
CN104111875A (en) * 2014-07-03 2014-10-22 重庆大学 Device, system and method for dynamically controlling number of newly-increased tasks at cloud data center
CN104182359A (en) * 2013-05-23 2014-12-03 杭州宏杉科技有限公司 Buffer allocation method and device thereof
CN104717439A (en) * 2014-01-02 2015-06-17 杭州海康威视系统技术有限公司 Data flow control method and device thereof in video storage system
CN104780210A (en) * 2015-04-13 2015-07-15 杭州华三通信技术有限公司 Load balancing method and device
CN104852860A (en) * 2015-05-04 2015-08-19 四川大学 Queue-based multi-target scheduling strategy for heterogeneous resources
WO2015144089A1 (en) * 2014-03-28 2015-10-01 Tencent Technology (Shenzhen) Company Limited Application recommending method and apparatus
CN105007337A (en) * 2015-08-20 2015-10-28 浪潮(北京)电子信息产业有限公司 Cluster system load balancing method and system thereof
CN105049509A (en) * 2015-07-23 2015-11-11 浪潮电子信息产业股份有限公司 Cluster scheduling method, load balancer and clustering system
CN105100237A (en) * 2015-07-15 2015-11-25 浪潮(北京)电子信息产业有限公司 Scheduling control method and scheduling control system
CN105302638A (en) * 2015-11-04 2016-02-03 国家计算机网络与信息安全管理中心 MPP (Massively Parallel Processing) cluster task scheduling method based on system load
CN105335229A (en) * 2014-07-25 2016-02-17 杭州华三通信技术有限公司 Business resource scheduling method and apparatus
CN106612310A (en) * 2015-10-23 2017-05-03 腾讯科技(深圳)有限公司 A server scheduling method, apparatus and system
CN106878042A (en) * 2015-12-18 2017-06-20 北京奇虎科技有限公司 Container resource regulating method and system based on SLA
CN106911772A (en) * 2017-02-20 2017-06-30 联想(北京)有限公司 Server-assignment method, server-assignment device and electronic equipment
CN107247729A (en) * 2017-05-03 2017-10-13 中国银联股份有限公司 A kind of document handling method and device
CN107395708A (en) * 2017-07-14 2017-11-24 郑州云海信息技术有限公司 A kind of method and apparatus for handling download request
CN108449215A (en) * 2018-03-31 2018-08-24 甘肃万维信息技术有限责任公司 Based on distributed server performance monitoring system
CN108551489A (en) * 2018-05-07 2018-09-18 广东电网有限责任公司 A kind of application server load balancing method, system, device and storage medium
CN108874535A (en) * 2018-05-14 2018-11-23 中国平安人寿保险股份有限公司 A kind of task adjusting method, computer readable storage medium and terminal device
CN109995818A (en) * 2017-12-29 2019-07-09 中移(杭州)信息技术有限公司 A kind of method and device of server load balancing
CN110049143A (en) * 2019-05-31 2019-07-23 华迪计算机集团有限公司 Load-balancing method and device
US20210400115A1 (en) * 2016-09-16 2021-12-23 Oracle International Corporation Cloud operation reservation system
CN116112493A (en) * 2023-02-09 2023-05-12 网易(杭州)网络有限公司 Communication method, device, electronic equipment and storage medium

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101719082A (en) * 2009-12-24 2010-06-02 中国科学院计算技术研究所 Method and system for dispatching application requests in virtual calculation platform

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Gong Mei et al., "A Transparent Dynamic Feedback Load Balancing Algorithm for Cluster Systems", Journal of Computer Applications (《计算机应用》), 30 November 2007, pages 2662-2665, relevant to claims 1-6 *
Gong Mei et al.: "A Transparent Dynamic Feedback Load Balancing Algorithm for Cluster Systems", Journal of Computer Applications (《计算机应用》), 30 November 2007 (2007-11-30), pages 2662-2665 *

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801766B (en) * 2011-11-18 2015-01-07 北京安天电子设备有限公司 Method and system for load balancing and data redundancy backup of cloud server
CN102801766A (en) * 2011-11-18 2012-11-28 北京安天电子设备有限公司 Method and system for load balancing and data redundancy backup of cloud server
CN103179048A (en) * 2011-12-21 2013-06-26 中国电信股份有限公司 Method and system for changing main machine quality of service (QoS) strategies of cloud data center
CN103179048B (en) * 2011-12-21 2016-04-13 中国电信股份有限公司 Main frame qos policy transform method and the system of cloud data center
CN102710779A (en) * 2012-06-06 2012-10-03 合肥工业大学 Load balance strategy for allocating service resource based on cloud computing environment
CN102710779B (en) * 2012-06-06 2014-09-24 合肥工业大学 Load balance strategy for allocating service resource based on cloud computing environment
CN103258149A (en) * 2012-07-27 2013-08-21 天津中启创科技有限公司 Online reading system and method based on cloud computing
CN103812895A (en) * 2012-11-12 2014-05-21 华为技术有限公司 Scheduling method, management nodes and cloud computing cluster
CN103024081A (en) * 2013-01-04 2013-04-03 福建星网视易信息系统有限公司 Peer-to-peer communication terminal dispatching method adaptable to time-effect-guaranteed communication systems
CN103024081B (en) * 2013-01-04 2016-01-20 福建星网锐捷通讯股份有限公司 Be applicable to the terminal scheduling method of the point-to-point communication of effective guarantee communication system
CN104023042A (en) * 2013-03-01 2014-09-03 清华大学 Cloud platform resource scheduling method
CN104023042B (en) * 2013-03-01 2017-05-24 清华大学 Cloud platform resource scheduling method
CN103179217B (en) * 2013-04-19 2016-01-13 中国建设银行股份有限公司 A kind of load-balancing method for WEB application server farm and device
CN103179217A (en) * 2013-04-19 2013-06-26 中国建设银行股份有限公司 Load balancing method and device applicable to WEB application server group
CN104182359A (en) * 2013-05-23 2014-12-03 杭州宏杉科技有限公司 Buffer allocation method and device thereof
CN104182359B (en) * 2013-05-23 2017-11-14 杭州宏杉科技股份有限公司 A kind of cache allocation method and device
CN103338228B (en) * 2013-05-30 2016-12-28 江苏大学 Cloud computing load balancing dispatching algorithms based on double weighting Smallest connection algorithms
CN103338228A (en) * 2013-05-30 2013-10-02 江苏大学 Cloud calculating load balancing scheduling algorithm based on double-weighted least-connection algorithm
CN104717439A (en) * 2014-01-02 2015-06-17 杭州海康威视系统技术有限公司 Data flow control method and device thereof in video storage system
CN104717439B (en) * 2014-01-02 2017-12-01 杭州海康威视系统技术有限公司 Data flow control method and its device in Video Storage System
US10679132B2 (en) 2014-03-28 2020-06-09 Tencent Technology (Shenzhen) Company Limited Application recommending method and apparatus
US9953262B2 (en) 2014-03-28 2018-04-24 Tencent Technology (Shenzhen) Company Limited Application recommending method and apparatus
WO2015144089A1 (en) * 2014-03-28 2015-10-01 Tencent Technology (Shenzhen) Company Limited Application recommending method and apparatus
CN104111875A (en) * 2014-07-03 2014-10-22 重庆大学 Device, system and method for dynamically controlling number of newly-increased tasks at cloud data center
CN104111875B (en) * 2014-07-03 2017-11-28 重庆大学 Cloud data center increases number of tasks device for controlling dynamically, system and method newly
CN105335229B (en) * 2014-07-25 2020-07-07 新华三技术有限公司 Scheduling method and device of service resources
CN105335229A (en) * 2014-07-25 2016-02-17 杭州华三通信技术有限公司 Business resource scheduling method and apparatus
CN104780210A (en) * 2015-04-13 2015-07-15 杭州华三通信技术有限公司 Load balancing method and device
CN104780210B (en) * 2015-04-13 2019-01-25 新华三技术有限公司 Load-balancing method and device
CN104852860A (en) * 2015-05-04 2015-08-19 四川大学 Queue-based multi-target scheduling strategy for heterogeneous resources
CN104852860B (en) * 2015-05-04 2019-04-23 四川大学 A kind of heterogeneous resource Multiobjective Scheduling strategy based on queue
CN105100237A (en) * 2015-07-15 2015-11-25 浪潮(北京)电子信息产业有限公司 Scheduling control method and scheduling control system
CN105049509A (en) * 2015-07-23 2015-11-11 浪潮电子信息产业股份有限公司 Cluster scheduling method, load balancer and clustering system
CN105007337A (en) * 2015-08-20 2015-10-28 浪潮(北京)电子信息产业有限公司 Cluster system load balancing method and system thereof
CN106612310A (en) * 2015-10-23 2017-05-03 腾讯科技(深圳)有限公司 A server scheduling method, apparatus and system
CN105302638A (en) * 2015-11-04 2016-02-03 国家计算机网络与信息安全管理中心 MPP (Massively Parallel Processing) cluster task scheduling method based on system load
CN105302638B (en) * 2015-11-04 2018-11-20 国家计算机网络与信息安全管理中心 MPP cluster task dispatching method based on system load
CN106878042A (en) * 2015-12-18 2017-06-20 北京奇虎科技有限公司 Container resource regulating method and system based on SLA
US11503128B2 (en) * 2016-09-16 2022-11-15 Oracle International Corporation Cloud operation reservation system
US20210400115A1 (en) * 2016-09-16 2021-12-23 Oracle International Corporation Cloud operation reservation system
CN106911772A (en) * 2017-02-20 2017-06-30 联想(北京)有限公司 Server-assignment method, server-assignment device and electronic equipment
CN107247729B (en) * 2017-05-03 2021-04-27 中国银联股份有限公司 File processing method and device
CN107247729A (en) * 2017-05-03 2017-10-13 中国银联股份有限公司 A kind of document handling method and device
CN107395708A (en) * 2017-07-14 2017-11-24 郑州云海信息技术有限公司 A kind of method and apparatus for handling download request
CN107395708B (en) * 2017-07-14 2021-04-02 郑州云海信息技术有限公司 Method and device for processing download request
CN109995818A (en) * 2017-12-29 2019-07-09 中移(杭州)信息技术有限公司 A kind of method and device of server load balancing
CN108449215A (en) * 2018-03-31 2018-08-24 甘肃万维信息技术有限责任公司 Based on distributed server performance monitoring system
CN108551489A (en) * 2018-05-07 2018-09-18 广东电网有限责任公司 A kind of application server load balancing method, system, device and storage medium
CN108874535A (en) * 2018-05-14 2018-11-23 中国平安人寿保险股份有限公司 A kind of task adjusting method, computer readable storage medium and terminal device
CN108874535B (en) * 2018-05-14 2022-06-10 中国平安人寿保险股份有限公司 Task adjusting method, computer readable storage medium and terminal device
CN110049143A (en) * 2019-05-31 2019-07-23 华迪计算机集团有限公司 Load-balancing method and device
CN116112493A (en) * 2023-02-09 2023-05-12 网易(杭州)网络有限公司 Communication method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN102195886B (en) 2014-02-05

Similar Documents

Publication Publication Date Title
CN102195886B (en) Service scheduling method on cloud platform
CN103713956B (en) Method for intelligent weighing load balance in cloud computing virtualized management environment
EP2466460B1 (en) Compiling apparatus and method for a multicore device
EP3129880B1 (en) Method and device for augmenting and releasing capacity of computing resources in real-time stream computing system
Enokido et al. Process allocation algorithms for saving power consumption in peer-to-peer systems
CN102111337B (en) Method and system for task scheduling
CN103338228A (en) Cloud calculating load balancing scheduling algorithm based on double-weighted least-connection algorithm
Peschlow et al. A flexible dynamic partitioning algorithm for optimistic distributed simulation
CN102281290B (en) Emulation system and method for a PaaS (Platform-as-a-service) cloud platform
CN104902001B (en) Web request load-balancing method based on operating system virtualization
KR20110049507A (en) Apparatus and method for executing application
Stavrinides et al. Scheduling real‐time bag‐of‐tasks applications with approximate computations in SaaS clouds
CN105760227B (en) Resource regulating method and system under cloud environment
Duolikun et al. An energy-aware algorithm to migrate virtual machines in a server cluster
Stavrinides et al. Cost‐aware cloud bursting in a fog‐cloud environment with real‐time workflow applications
CN103488538B (en) Application extension device and application extension method in cloud computing system
CN114706689B (en) Multi-core processor task scheduling method and system based on subtask characteristics
Duolikun et al. An energy-efficient process migration approach to reducing electric energy consumption in a cluster of servers
Samir et al. Autoscaling recovery actions for container‐based clusters
CN104519082B (en) A kind of expansion method and device of cloud computing
Xue et al. BOLAS: bipartite-graph oriented locality-aware scheduling for MapReduce tasks
Shahapure et al. Distance and traffic based virtual machine migration for scalability in cloud computing
CN108268310B (en) Method and device for determining minimum scheduling granularity
CN112148474B (en) Loongson big data all-in-one self-adaptive task segmentation method and system for load balancing
Mao et al. Efficient subtorus processor allocation in a multi-dimensional torus

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: BEIHANG UNIVERSITY

Free format text: FORMER OWNER: LAN YUQING

Effective date: 20130718

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100084 HAIDIAN, BEIJING TO: 100191 HAIDIAN, BEIJING

TA01 Transfer of patent application right

Effective date of registration: 20130718

Address after: 100191 Haidian District, Xueyuan Road, No. 37,

Applicant after: Beihang University

Address before: 205, room 2, building 15, building 100084, brown stone garden, Dongmen east gate, Old Summer Palace, Beijing, Haidian District

Applicant before: Lan Yuqing

C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210415

Address after: No.217, 2nd floor, block a, No.51, Kunming Hunan Road, Haidian District, Beijing

Patentee after: CHINA STANDARD INTELLIGENT SECURITY INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 100191 No. 37, Haidian District, Beijing, Xueyuan Road

Patentee before: BEIHANG University