CN110149395A - Dynamic load balancing method for high-concurrency access to massive small files - Google Patents
Dynamic load balancing method for high-concurrency access to massive small files
- Publication number
- CN110149395A (application CN201910418947.7A)
- Authority
- CN
- China
- Prior art keywords
- node
- value
- data server
- small files
- high concurrency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Debugging And Monitoring (AREA)
Abstract
The invention discloses a dynamic load balancing method for high-concurrency access to massive small files. The method addresses the high-concurrency access problem that arises in distributed storage systems for massive small files. It periodically monitors and collects indicator information such as CPU utilization, memory utilization, disk I/O utilization and network bandwidth of the data server nodes in the distributed file storage system FastDFS, determines the relative importance of these indicators with a comprehensive evaluation method, and sends the indicator information to the scheduler. The scheduler then calculates the integrated load of each data server node as a weighted sum and dynamically adjusts the node's weight, so that server resources are fully utilized while requests on massive small files are processed efficiently under high concurrency.
Description
Technical field
The present invention relates to network load balancing methods, and more particularly to a load balancing method for a distributed small file storage system under high-concurrency access; it belongs to the resource scheduling techniques for distributed file system storage access.
Background art
With the rapid development and growing ubiquity of the mobile Internet, the era of the Internet of Everything is approaching. Massive amounts of mobile data are generated every day, in particular by the short-video social apps and e-commerce apps that have emerged in recent years. These apps produce large numbers of short videos and pictures daily, each of which occupies relatively little space, generally around tens of KB and at most a few tens of MB.
However, the well-known distributed file storage systems such as HDFS and GFS are designed for large files. Although they have built-in load balancing mechanisms, those mechanisms are mainly intended to solve the distribution and scheduling of data blocks across data server nodes and are not meant to solve the problem of accessing massive small files under high concurrency. The distributed file system FastDFS is a distributed file storage system designed specifically for storing and accessing massive small files, but the load balancing algorithm it provides is based on round robin. The round-robin algorithm mechanically assigns user requests to the back-end storage servers in turn, without considering the performance and real-time load of the back-end server nodes, so its reliability in practical applications is often poor. The common load balancing algorithms currently in use are:
1) Weighted round robin: an upgraded version of the round-robin scheduling algorithm. This load balancing algorithm assigns each server a weight at startup, determined by the server's own capability; the better the performance, the larger the weight. The scheduler distributes requests in proportion to the server weights: servers with a high weight are assigned more requests and servers with a low weight fewer. Because the server weights cannot change dynamically with the real-time load of the servers, an overloaded server may still be assigned requests during operation, causing user requests to fail or even the server to crash.
2) Least connections: the scheduler records the number of connections each back-end server holds on a given port and the number of requests it is handling, and assigns each new request to the server with the fewest connections. In a distributed system this allocation algorithm generally works well when the back-end servers have identical processing capability. Real distributed systems, however, are usually heterogeneous networks; in that case the method does not consider the performance and load of the servers, and performance may degrade sharply.
3) Fastest response time: the scheduler distributes requests intelligently according to the response times of the back-end servers, and servers with a short response time are assigned requests preferentially. Since the response time is usually obtained by the scheduler sending a probe request to a back-end server node, it lags behind reality and cannot reflect the true situation.
Summary of the invention
In view of the shortcomings of existing distributed file storage systems in handling high-concurrency access, and of existing load balancing methods that are likewise unsuited to high-concurrency access to small files, the present invention provides a dynamic, self-adaptive load balancing scheduling method for high-concurrency access to massive small files in the distributed file system FastDFS.
The method collects the load information of the back-end data server nodes and dynamically adjusts their weights according to that information, so as to make full and reasonable use of the back-end server resources.
The object of the invention is achieved through the following technical solutions:
A dynamic load balancing method for high-concurrency access to massive small files comprises the following steps:
1) Each node in the data server cluster starts a timed task that periodically collects the utilization of its resources; the resource utilization includes CPU utilization Ucpu, memory utilization Umem, disk I/O utilization Uio and network bandwidth utilization Unet. Di denotes the i-th node in the data server cluster. The CPU utilization Ucpu(i) of node Di is the ratio of the sum of the user-mode and kernel-mode CPU time of node Di within the time period t1~t2 to the total CPU time within t1~t2, where t1 is the time of the previous calculation and t2 is the time of the current calculation.
The memory utilization Umem(i) of node Di is calculated as Umem(i) = Muse/Mtotal, where Muse is the amount of memory used by node Di at time t2 and Mtotal is the total memory of Di.
The disk I/O utilization of node Di is Uio(i).
The network bandwidth utilization Unet(i) of node Di is the ratio of the network traffic of node Di within t1~t2 to netmax, the maximum bandwidth of the network interface card.
2) Each data server node sends its own Ucpu, Umem, Uio and Unet to the scheduling server;
3) After receiving Ucpu, Umem, Uio and Unet from the data server nodes, the scheduling server calculates the load value Loadi of each data server node with the following formula:
Loadi = R1×Ucpu(i) + R2×Umem(i) + R3×Uio(i) + R4×Unet(i);
where R1, R2, R3 and R4 are the impact factors of CPU, memory, disk I/O and network bandwidth respectively, with Ri ∈ (0, 1) for i ∈ {1, 2, 3, 4};
4) When the back-end data server nodes start running, each node i is given an initial weight Wi according to the hardware configuration information of the i-th node; Wi denotes the weight of the i-th node in the data server cluster;
5) The real-time weight Wi of each data server node is then obtained with a weight adjustment formula. In that formula a constant denotes the error allowed in the calculation: if the server load fluctuates around Loadbest by no more than this error, the weight does not need to change. Two further coefficients are used when adjusting the data server node weight. Loadbest is the desired load value of the system, Loadbest ∈ [0.7, 0.8] (a sketch of the load computation and of a plausible weight adjustment rule follows step 6);
6) According to the computed real-time weight of each data server node, the scheduler node dynamically changes the distribution of request tasks: nodes with a high weight receive more requests and nodes with a low weight receive fewer.
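The following minimal Python sketch illustrates steps 3) and 6) together with a weight adjustment in the spirit of step 5). The load computation follows the formula above (with the coefficient values of the embodiment); the weight update rule is only an assumption for illustration, since the exact adjustment formula is not reproduced here — it merely mimics the described behaviour of decreasing the weight quickly above Loadbest and increasing it slowly below. The names ALPHA, BETA, W_MIN and W_MAX are hypothetical.

```python
# Sketch of load computation (step 3) and a plausible weight update (step 5).
# The exact weight formula of the patent is not reproduced; this rule only
# mimics its described behaviour: fast decrease when overloaded, slow increase
# when under-loaded, no change inside the tolerance band around LOAD_BEST.

R = (0.15, 0.30, 0.35, 0.20)   # importance coefficients R1..R4 (embodiment values)
LOAD_BEST = 0.7                # desired load value
EPS = 0.03                     # tolerated fluctuation around LOAD_BEST
ALPHA, BETA = 2.0, 0.5         # hypothetical adjustment coefficients (ALPHA > BETA)
W_MIN, W_MAX = 0, 100          # hypothetical weight bounds

def load_value(u_cpu, u_mem, u_io, u_net):
    """Load_i = R1*Ucpu + R2*Umem + R3*Uio + R4*Unet."""
    return R[0] * u_cpu + R[1] * u_mem + R[2] * u_io + R[3] * u_net

def adjust_weight(weight, load):
    """Return the new weight of a node given its current weight and load."""
    if abs(load - LOAD_BEST) <= EPS:          # within the tolerance band: keep weight
        return weight
    if load > LOAD_BEST:                      # overloaded: reduce weight quickly
        weight -= ALPHA * (load - LOAD_BEST) * weight
    else:                                     # lightly loaded: raise weight slowly
        weight += BETA * (LOAD_BEST - load) * weight
    return max(W_MIN, min(W_MAX, round(weight)))

# Example: new_w = adjust_weight(32, load_value(0.4, 0.8, 0.9, 0.6))
```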
To further achieve the object of the invention, it is preferable that the user-mode and kernel-mode CPU times are obtained by reading the virtual file /proc/stat under Linux.
Preferably, Muse and Mtotal are obtained by reading the virtual file /proc/meminfo under Linux.
Preferably, Uio(i) is obtained by executing the command iostat -x -d sda directly under Linux; the command outputs a %util value, which indicates the utilization of the system disk I/O at the current time.
Preferably, the network traffic within t1~t2 is obtained by reading the virtual file /proc/net/dev under Linux.
Preferably, netmax is obtained with the ethtool tool.
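A minimal Python sketch of the per-node metric collection described in the preferred options above: it reads /proc/stat, /proc/meminfo and /proc/net/dev and shells out to iostat. Field positions follow the usual Linux formats; the sampling interval, the interface name eth0 and the assumption that %util is the last column of the iostat device line are illustrative, and netmax would be taken from ethtool in a real deployment.

```python
import subprocess
import time

def read_cpu_times():
    """Return (busy, total) jiffies from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    user, nice, system = fields[0], fields[1], fields[2]
    busy = user + nice + system          # user-mode + kernel-mode time
    return busy, sum(fields)

def cpu_utilization(interval=1.0):
    """Ucpu over the sampling interval t1~t2."""
    b1, t1 = read_cpu_times()
    time.sleep(interval)
    b2, t2 = read_cpu_times()
    return (b2 - b1) / (t2 - t1)

def mem_utilization():
    """Umem = Muse / Mtotal, from /proc/meminfo."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key] = int(value.split()[0])        # values are in kB
    return (info["MemTotal"] - info["MemAvailable"]) / info["MemTotal"]

def disk_io_utilization(device="sda"):
    """Uio from the %util column of `iostat -x -d <device>` (assumed last column)."""
    out = subprocess.run(["iostat", "-x", "-d", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if line.startswith(device):
            return float(line.split()[-1]) / 100.0
    return 0.0

def net_utilization(iface="eth0", max_mbps=1000, interval=1.0):
    """Unet = traffic in t1~t2 divided by the NIC maximum bandwidth
    (max_mbps would come from `ethtool <iface>` in a real deployment)."""
    def rx_tx_bytes():
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    parts = line.split(":")[1].split()
                    return int(parts[0]) + int(parts[8])   # rx_bytes + tx_bytes
        return 0
    b1 = rx_tx_bytes()
    time.sleep(interval)
    b2 = rx_tx_bytes()
    bits_per_sec = (b2 - b1) * 8 / interval
    return bits_per_sec / (max_mbps * 1e6)
```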
Preferably, the adjusting range of the initial weight Wi of the i-th node is [Wmin, Wmax], where Wmin = 0 and Wmax = max(Wi); the desired load value Loadbest ∈ [0.7, 0.8].
Preferably, the error constant takes 0.03 and Loadbest takes 0.7; the values of the adjustment coefficients are determined by running multiple groups of pressure tests on the data server cluster with the Apache Benchmark testing tool and by the actual application scenario.
In order to better achieve the purpose of the invention, the importance coefficients R1, R2, R3 and R4 of Ucpu, Umem, Uio and Unet need to be determined when calculating the load value. Multiple groups of pressure tests are run against the data server cluster with the Apache Benchmark testing tool and the results of each test, including the average request response time and the system throughput, are collected; the results are then analysed statistically with the TOPSIS comprehensive evaluation method to obtain the optimal value of each indicator coefficient (a sketch of driving such tests with Apache Benchmark is given below). Preferably, R1, R2, R3 and R4 are determined as follows:
(1) give R1, R2, R3 and R4 an initial value, then run a pressure test with Apache Benchmark;
(2) collect the results of each test;
(3) form the multiple test results into a matrix D;
(4) normalize the data in the matrix to obtain the decision matrix Z;
(5) weight the matrix to obtain the weighted decision matrix V;
(6) determine the positive ideal solution and the negative ideal solution, i.e. select the maximum value and the minimum value of each test indicator;
(7) calculate the distance D+ of each test result to the positive ideal solution and its distance D− to the negative ideal solution, and evaluate each test result with the closeness C = D−/(D+ + D−); the larger C is, the better the test result, and the R1, R2, R3 and R4 selected at that time are optimal.
Preferably, the initial values of R1, R2, R3 and R4 are set to R1 = 0.2, R2 = 0.25, R3 = 0.3 and R4 = 0.25.
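Where the coefficients are tuned by multi-group pressure testing, the tests can be driven from a small script. The sketch below invokes the Apache Benchmark binary ab and extracts the mean time per request and the requests-per-second figures from its report; the URL, request counts and candidate coefficient tuples are placeholders, and applying a candidate on the scheduler is deployment-specific.

```python
import re
import subprocess

def run_ab(url, requests=10000, concurrency=100):
    """Run one Apache Benchmark pressure test and return
    (mean time per request in ms, requests per second)."""
    out = subprocess.run(
        ["ab", "-n", str(requests), "-c", str(concurrency), url],
        capture_output=True, text=True).stdout
    rps = float(re.search(r"Requests per second:\s+([\d.]+)", out).group(1))
    tpr = float(re.search(r"Time per request:\s+([\d.]+) \[ms\] \(mean\)", out).group(1))
    return tpr, rps

# Example: collect one row of the decision matrix per coefficient candidate.
candidates = [(0.2, 0.25, 0.3, 0.25), (0.15, 0.3, 0.35, 0.2)]   # illustrative R1..R4 values
results = []
for r in candidates:
    # ...apply the candidate coefficients r on the scheduler here (deployment-specific)...
    results.append((r, *run_ab("http://tracker.example.com/group1/M00/test.jpg")))
```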
Compared with the prior art, the invention has the following advantages:
1) Compared with the load balancing algorithms provided by existing distributed file systems themselves, the invention fully considers the real-time load capacity of the back-end data server nodes and dynamically adjusts their weights accordingly;
2) The invention collects many indicators of the back-end data server nodes, and the importance coefficient of each indicator can be adjusted according to the actual situation, so that the calculated weight better matches the actual conditions.
Detailed description of the invention
Fig. 1 is a flow chart of obtaining the ideal values of the indicator scale parameters of the data server nodes according to the invention.
Fig. 2 is the overall flow chart of the execution of the load balancing algorithm in the distributed file storage system according to the invention.
Fig. 3 is the architecture diagram of the massive small file storage and access system of the invention.
Fig. 4 is a comparison chart of the request processing time of the method of the invention and of the weighted round-robin load balancing algorithm.
Fig. 5 is a comparison chart of the system throughput of the method of the invention and of the weighted round-robin load balancing algorithm.
Specific embodiment
To better understand the present invention, the invention is further explained below with reference to the accompanying drawings and embodiments, but the embodiments of the invention are not limited to the scope expressed by these embodiments:
Embodiment
In this embodiment, the distributed file storage system FastDFS is used to store massive small files. The access process mainly involves reading and writing files and is an I/O-intensive task. The system performance when reading small files under high concurrency is therefore determined mainly by the following factors: 1) frequent file reads cause the system to perform frequent disk I/O operations, so the performance requirement on disk I/O is very high; 2) during file reads and writes the system memory exchanges large amounts of data with the disk and caches a lot of file content, which drives memory usage up, so memory also plays an important role in system performance; 3) the files requested by clients must be transmitted from the storage nodes over the network, and under high concurrency a large amount of file data must be transferred, so network bandwidth is an important factor for efficient data transmission; 4) high-concurrency file access causes frequent CPU context switches, so CPU utilization also has some influence on system performance. In general, when reading small files under high concurrency, memory, disk I/O utilization and network bandwidth play a key role in system performance, so the coefficients R2, R3 and R4 of these three dynamic indicators are higher and R1 is lower. R1, R2, R3 and R4 are the importance coefficients of Ucpu, Umem, Uio and Unet respectively; Ucpu, Umem, Uio and Unet are the CPU utilization, memory utilization, disk I/O utilization and network bandwidth utilization respectively.
Fig. 1 shows how the ideal values of the indicator influence ratios used when calculating the data server node load are obtained and how the coefficient values are adjusted until they converge; this can be divided into the following steps:
Step 101: first, according to the configuration of the data servers, such as the number of CPUs, their frequency and the memory size, give the importance coefficients R1, R2, R3 and R4 of Ucpu, Umem, Uio and Unet initial weights, where R1 = 0.2, R2 = 0.25, R3 = 0.3 and R4 = 0.25;
Step 102: run a pressure test against the distributed file system FastDFS with the Apache Benchmark testing tool and collect the test results;
Step 103: adjust and modify the coefficient values of the data server node indicators, then continue running multiple pressure tests with the Apache Benchmark testing tool;
Step 104: after multiple pressure tests, compare the results of the tests by statistical analysis (a sketch of this TOPSIS evaluation follows step 105).
Suppose m tests have been carried out, each yielding n result indicators; the results form an m×n matrix D. The data in the matrix are normalized to obtain the decision matrix Z. Let wj be the weight of the j-th indicator, expressing the importance of the j-th indicator; the weighted decision matrix V is then constructed as
vij = wj × rij, i = 1, 2, ..., m, j = 1, 2, ..., n,
where rij are the entries of Z. The positive ideal solution V+ and the negative ideal solution V− are determined, a larger vij in the weighted decision matrix V indicating a better test result.
Each test result is then evaluated by its closeness Ci to the ideal solutions; a larger Ci indicates that the i-th test result is better and therefore that the indicator parameters selected for that test are better;
Step 105: obtain the optimal indicator influence ratio values. By repeating the pressure tests, collecting every test result and determining the optimal solution with the TOPSIS procedure above, the coefficient values of the indicators are determined to be R1 = 0.15, R2 = 0.3, R3 = 0.35 and R4 = 0.2.
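A compact Python sketch of the TOPSIS evaluation used in steps 104 and 105 is given below. Vector normalization and the example indicator weights are assumptions where the description does not fix them; the closeness score Ci = Di− / (Di+ + Di−) follows the standard TOPSIS definition. Note that a cost-type indicator such as the mean response time is smaller-is-better, so its ideal value is the column minimum.

```python
import numpy as np

def topsis(D, weights, benefit):
    """Rank m test results (rows of D) over n indicators (columns).
    `benefit[j]` is True for larger-is-better indicators (e.g. throughput)
    and False for smaller-is-better ones (e.g. mean response time)."""
    D = np.asarray(D, dtype=float)
    Z = D / np.linalg.norm(D, axis=0)            # normalized decision matrix
    V = Z * np.asarray(weights)                  # weighted decision matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)   # distance to positive ideal solution
    d_minus = np.linalg.norm(V - anti, axis=1)   # distance to negative ideal solution
    return d_minus / (d_plus + d_minus)          # Ci: larger is better

# Illustrative use: columns = (mean response time in ms, throughput in req/s),
# rows = one pressure-test group per coefficient candidate.
D = [[120.0, 850.0],
     [ 95.0, 990.0],
     [110.0, 910.0]]
C = topsis(D, weights=[0.5, 0.5], benefit=[False, True])
best = int(np.argmax(C))   # index of the best coefficient candidate
```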
Fig. 2 is the overall flow chart of the execution of the load balancing algorithm in the distributed file storage system; the specific steps include:
Step 201: each node in the data server cluster starts a timed task that periodically collects the utilization of its resources, including CPU utilization Ucpu, memory utilization Umem, disk I/O utilization Uio and network bandwidth utilization Unet. Di denotes the i-th node in the data server cluster. The CPU utilization Ucpu(i) of node Di is the ratio of the sum of its user-mode and kernel-mode CPU time within the time period t1~t2 to the total CPU time within t1~t2, where t1 is the time of the previous calculation and t2 is the time of the current calculation; the memory utilization Umem(i) is Muse/Mtotal, where Muse is the amount of memory used by node Di at time t2 and Mtotal is the total memory of Di; the disk I/O utilization of node Di is Uio(i); the network bandwidth utilization Unet(i) is the ratio of the network traffic of node Di within t1~t2 to netmax, the maximum bandwidth of the network interface card.
The scheduler node uses the epoll I/O multiplexing mechanism of the Linux operating system in its socket programming and listens on port 9999, to which all back-end data server nodes connect;
Step 202: each back-end server node periodically sends indicator information such as its CPU utilization, memory utilization, disk I/O utilization and connection count to the scheduler over this connection;
Step 203: after the scheduler receives the indicator information of each data server node, it first calculates the node's load value and then its weight:
Loadi = 0.15×Ucpu(i) + 0.3×Umem(i) + 0.35×Uio(i) + 0.2×Unet(i), i = 1, 2, ..., n.
Here the constant in the weight formula denotes the error that is tolerated during the calculation: if the server load fluctuates around Loadbest by no more than this error, the weight does not need to change. The two adjustment coefficients used in computing the data server node weight are chosen so that the weight is reduced quickly when a data server node is overloaded, relieving its burden, and increased slowly when the node's load is small, so that the node is not burdened too quickly. In addition, no single resource utilization may become too high, since an excessively high utilization would prevent the server from providing service normally; a threshold is therefore set for each resource utilization, and when a node exceeds the threshold the scheduler sets its weight Wi to 0 so that no further requests are assigned to it. The threshold is defined here as 95%.
Step 204: after computing the weights of the back-end data server nodes, the scheduler dynamically changes its request distribution strategy according to the weights, i.e. nodes with a large weight are assigned more requests and nodes with a small weight are assigned fewer or even no requests (a sketch of this smooth weighted selection follows step 205):
Suppose there are three back-end servers s1, s2 and s3 and that the scheduler has dynamically set their weights to 4, 2 and 1 during operation. When the scheduler receives 7 requests, the assignment proceeds as shown in the table. As can be seen from the table, the assignment order of the 7 requests is s1, s2, s1, s3, s1, s2, s1; consecutive requests are never all assigned to the same node, and within these 7 requests s1 is assigned 4 times, s2 twice and s3 once, so the scheduler's distribution is both reasonable and very smooth;
Step 205: the scheduler determines whether to continue collecting the indicator information of the data server nodes; if so, it jumps back to step 203, otherwise it exits the program.
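The assignment order s1, s2, s1, s3, s1, s2, s1 in the example of step 204 matches the behaviour of a smooth weighted round-robin selector; the Python sketch below reproduces the example under that assumption. The description itself only requires that nodes receive requests in proportion to their dynamically adjusted weights, so this selector is one possible realization, not the only one.

```python
class SmoothWeightedScheduler:
    """Smooth weighted round-robin selection; with weights
    {s1: 4, s2: 2, s3: 1} it yields s1, s2, s1, s3, s1, s2, s1 for 7 requests,
    the same order as the example in step 204."""

    def __init__(self, weights):
        self.weights = dict(weights)                 # node -> dynamically updated weight
        self.current = {node: 0 for node in weights}

    def update_weight(self, node, weight):
        """Called when the scheduler recomputes a node's real-time weight."""
        self.weights[node] = weight

    def pick(self):
        total = sum(self.weights.values())
        for node, w in self.weights.items():
            self.current[node] += w                  # raise every node's current value
        chosen = max(self.current, key=self.current.get)
        self.current[chosen] -= total                # penalize the chosen node
        return chosen

scheduler = SmoothWeightedScheduler({"s1": 4, "s2": 2, "s3": 1})
order = [scheduler.pick() for _ in range(7)]         # ['s1', 's2', 's1', 's3', 's1', 's2', 's1']
```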
In addition, each data server node sends its indicator information to the scheduler periodically, i.e. every time interval t. This interval must not be too short: collecting the indicators too frequently affects the data server nodes' ability to handle file read requests, and also makes the scheduler recompute and adjust the weights frequently, adding a considerable burden to the scheduler. Nor should t be too long: too long an interval introduces lag, so that the indicator information collected by the scheduler is not accurate enough and cannot reflect the true load of the data server cluster. The interval t is therefore set according to the concrete situation of the data server nodes, as in the sketch below.
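A minimal sketch of the periodic reporting loop on the data-node side is shown below (the scheduler listens on port 9999, as described in step 201). The JSON-lines message format, the 5-second interval and the scheduler host name are assumptions; collect_metrics would typically be built from the collection functions sketched earlier.

```python
import json
import socket
import time

SCHEDULER_ADDR = ("scheduler.example.com", 9999)   # scheduler listens here via epoll
REPORT_INTERVAL = 5.0                              # the interval t; tune per deployment

def report_loop(node_id, collect_metrics):
    """Periodically push this node's indicators to the scheduler over one TCP
    connection. collect_metrics() should return a dict such as
    {"cpu": ..., "mem": ..., "io": ..., "net": ...}."""
    with socket.create_connection(SCHEDULER_ADDR) as conn:
        while True:
            payload = {"node": node_id, **collect_metrics()}
            conn.sendall((json.dumps(payload) + "\n").encode())
            time.sleep(REPORT_INTERVAL)
```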
Fig. 3 is the overall architecture diagram of the massive small file storage and access system of the embodiment, which comprises a user access layer, a data access layer and a data storage layer. The data access layer mainly proxies user accesses: it serves the user access layer above it and connects to the data storage layer below it, and when handling high-concurrency small file access requests it can schedule them reasonably according to the load of the storage nodes in the data storage layer.
The performance of the method of the invention is evaluated with the following indicators:
1) Request processing time: for a user, the request processing time is the time from sending a request to the completion of that request. It reflects the quality of service the server provides to the user; the smaller the value, the better the user experience.
2) System throughput: for the system, the throughput is the total amount of data transmitted over the network per unit time, or equivalently the number of user requests the system processes per unit time. It is an important indicator of system performance and is usually measured in requests per second.
Further, to show more intuitively the performance improvement that the algorithm of the invention brings to high-concurrency access to massive small files in actual operation, this embodiment also compares the static weighted round-robin algorithm with the method of the invention. Fig. 4 shows the comparison of request processing times under different concurrency levels. When the concurrent access volume is small, the average response time of the algorithm of the invention and of the weighted round-robin load balancing algorithm is similar, and the weighted round-robin algorithm may even perform slightly better. This is because the system load is still low and has not yet reached the performance bottleneck, which is reflected in the low resource utilization of the back-end file server nodes, whereas the invention must periodically collect the resource utilization information of each node and compute the weights, which imposes some burden on the scheduler. As the concurrent access volume increases, the large number of file access requests raises the system load, and the weighted round-robin algorithm, which ignores the real-time load of the back-end nodes, keeps assigning tasks rigidly according to the weights fixed at startup. The invention, by contrast, fully considers the dynamic factors that affect back-end node performance and feeds them back to the scheduler periodically, raising a node's weight slowly while its load value is low; the scheduler therefore has a more accurate view of the back-end load without changing the weights too frequently, can distribute tasks better according to the actual situation, and performs better at the same high-concurrency request volume.
Fig. 5 shows the comparison of throughput under different concurrent access volumes. The two methods perform roughly the same when the concurrent access volume is not high, but as the concurrent access volume increases the method of the invention performs better. This is because the weighted round-robin algorithm does not consider the real-time load state of each node of the distributed system when distributing tasks and cannot balance tasks according to the load level of each node; tasks pile up on heavily loaded nodes, so that many requests cannot be handled for a long time or even time out, which reduces system throughput. The invention fully considers the dynamic factors that affect back-end node performance and the heterogeneity of the cluster, and feeds them back to the scheduler periodically; the scheduler therefore obtains a more accurate picture of the load of every node in the system and can distribute tasks more reasonably according to the actual situation. It achieves higher throughput at the same high concurrent access volume, and when applied in a distributed storage system for small files it yields faster file access.
The embodiments of the present invention are not limited to the examples above; any other change or simplification made without departing from the spirit and principle of the invention shall be regarded as an equivalent substitution and falls within the protection scope of the invention.
Claims (10)
1. A dynamic load balancing method for high-concurrency access to massive small files, characterized by comprising the following steps:
1) each node in the data server cluster starts a timed task that periodically collects the utilization of its resources, the resource utilization including CPU utilization Ucpu, memory utilization Umem, disk I/O utilization Uio and network bandwidth utilization Unet; Di denotes the i-th node in the data server cluster, and the CPU utilization Ucpu(i) of node Di is the ratio of the sum of the user-mode and kernel-mode CPU time of node Di within the time period t1~t2 to the total CPU time within t1~t2, where t1 is the time of the previous calculation and t2 is the time of the current calculation;
the memory utilization Umem(i) of node Di is calculated as Umem(i) = Muse/Mtotal, where Muse is the amount of memory used by node Di at time t2 and Mtotal is the total memory of Di;
the disk I/O utilization of node Di is Uio(i);
the network bandwidth utilization Unet(i) of node Di is the ratio of the network traffic of node Di within t1~t2 to netmax, the maximum bandwidth of the network interface card;
2) each data server node sends its own Ucpu, Umem, Uio and Unet to the scheduling server;
3) after receiving Ucpu, Umem, Uio and Unet from the data server nodes, the scheduling server calculates the load value Loadi of each data server node with the following formula:
Loadi = R1×Ucpu(i) + R2×Umem(i) + R3×Uio(i) + R4×Unet(i);
where R1, R2, R3 and R4 are the impact factors of CPU, memory, disk I/O and network bandwidth respectively, with Ri ∈ (0, 1) for i ∈ {1, 2, 3, 4};
4) when the back-end data server nodes start running, each node i is given an initial weight Wi according to the hardware configuration information of the i-th node, Wi denoting the weight of the i-th node in the data server cluster;
5) the real-time weight Wi of each data server node is calculated with a weight adjustment formula in which a constant denotes the error allowed in the calculation, meaning that if the server load fluctuates around Loadbest by no more than this error the weight does not need to change, and two further coefficients are used in adjusting the data server node weight; Loadbest is the desired load value of the system, Loadbest ∈ [0.7, 0.8];
6) according to the calculated real-time weight of each data server node, the scheduler node dynamically changes the distribution of request tasks: nodes with a high weight receive more requests and nodes with a low weight receive fewer.
2. The dynamic load balancing method for high-concurrency access to massive small files according to claim 1, characterized in that the user-mode and kernel-mode CPU times are obtained by reading the virtual file /proc/stat under Linux.
3. The dynamic load balancing method for high-concurrency access to massive small files according to claim 1, characterized in that Muse and Mtotal are obtained by reading the virtual file /proc/meminfo under Linux.
4. The dynamic load balancing method for high-concurrency access to massive small files according to claim 1, characterized in that Uio(i) is obtained by executing the command iostat -x -d sda directly under Linux; the command outputs a %util value, which indicates the utilization of the system disk I/O at the current time.
5. The dynamic load balancing method for high-concurrency access to massive small files according to claim 1, characterized in that the network traffic within t1~t2 is obtained by reading the virtual file /proc/net/dev under Linux.
6. The dynamic load balancing method for high-concurrency access to massive small files according to claim 1, characterized in that netmax is obtained with the ethtool tool.
7. The dynamic load balancing method for high-concurrency access to massive small files according to claim 1, characterized in that the adjusting range of the initial weight Wi of the i-th node is [Wmin, Wmax], where Wmin = 0 and Wmax = max(Wi); the desired load value Loadbest ∈ [0.7, 0.8].
8. The dynamic load balancing method for high-concurrency access to massive small files according to claim 1, characterized in that the error constant takes 0.03 and Loadbest takes 0.7; the values of the adjustment coefficients are determined by running multiple groups of pressure tests on the data server cluster with the Apache Benchmark testing tool and by the actual application scenario.
9. The dynamic load balancing method for high-concurrency access to massive small files according to claim 1, characterized in that R1, R2, R3 and R4 are determined as follows:
(1) give R1, R2, R3 and R4 an initial value, then run a pressure test with Apache Benchmark;
(2) collect the results of each test;
(3) form the multiple test results into a matrix D;
(4) normalize the data in the matrix to obtain the decision matrix Z;
(5) weight the matrix to obtain the weighted decision matrix V;
(6) determine the positive ideal solution and the negative ideal solution, i.e. select the maximum value and the minimum value of each test indicator;
(7) calculate the distance D+ of each test result to the positive ideal solution and its distance D− to the negative ideal solution, and evaluate each test result with the closeness C = D−/(D+ + D−); the larger C is, the better the test result, and the R1, R2, R3 and R4 selected at that time are optimal.
10. The dynamic load balancing method for high-concurrency access to massive small files according to claim 9, characterized in that the initial values of R1, R2, R3 and R4 are set to R1 = 0.2, R2 = 0.25, R3 = 0.3 and R4 = 0.25.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910418947.7A CN110149395A (en) | 2019-05-20 | 2019-05-20 | Dynamic load balancing method for high-concurrency access to massive small files |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910418947.7A CN110149395A (en) | 2019-05-20 | 2019-05-20 | Dynamic load balancing method for high-concurrency access to massive small files |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110149395A true CN110149395A (en) | 2019-08-20 |
Family
ID=67592178
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910418947.7A Pending CN110149395A (en) | 2019-05-20 | 2019-05-20 | One kind is based on dynamic load balancing method in the case of mass small documents high concurrent |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110149395A (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110691118A (en) * | 2019-08-30 | 2020-01-14 | 许昌许继软件技术有限公司 | Service selection method and device in micro-service cluster |
CN110708369A (en) * | 2019-09-25 | 2020-01-17 | 深圳市网心科技有限公司 | File deployment method and device for equipment nodes, scheduling server and storage medium |
CN111324464A (en) * | 2020-03-12 | 2020-06-23 | 北京首汽智行科技有限公司 | Load distribution method based on micro-service architecture |
CN111526208A (en) * | 2020-05-06 | 2020-08-11 | 重庆邮电大学 | High-concurrency cloud platform file transmission optimization method based on micro-service |
CN111586134A (en) * | 2020-04-29 | 2020-08-25 | 新浪网技术(中国)有限公司 | CDN node overload scheduling method and system |
CN111782626A (en) * | 2020-08-14 | 2020-10-16 | 工银科技有限公司 | Task allocation method and device, distributed system, electronic device and medium |
CN111813542A (en) * | 2020-06-18 | 2020-10-23 | 浙大宁波理工学院 | Load balancing method and device for parallel processing of large-scale graph analysis tasks |
CN113099252A (en) * | 2021-03-29 | 2021-07-09 | 浙江工业大学 | Remote feeder video pushing system based on SIP and RTMP |
CN113297027A (en) * | 2020-08-31 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Method and device for selecting computing node and database |
CN113556397A (en) * | 2021-07-21 | 2021-10-26 | 山东建筑大学 | Cloud service resource scheduling method facing gateway of Internet of things |
CN113608878A (en) * | 2021-08-18 | 2021-11-05 | 上海德拓信息技术股份有限公司 | Task distributed scheduling method and system based on resource weight calculation |
CN113886081A (en) * | 2021-09-29 | 2022-01-04 | 南京地铁建设有限责任公司 | Station multi-face-brushing array face library segmentation method based on load balancing |
CN114443247A (en) * | 2021-12-29 | 2022-05-06 | 天翼云科技有限公司 | Task scheduling method and device |
CN114567637A (en) * | 2022-03-01 | 2022-05-31 | 浪潮云信息技术股份公司 | Method and system for intelligently setting weight of load balancing back-end server |
CN115208889A (en) * | 2022-05-12 | 2022-10-18 | 国家信息中心 | High-concurrency high-flow video safety isolation transmission method and system |
CN115629717A (en) * | 2022-12-08 | 2023-01-20 | 四川汉唐云分布式存储技术有限公司 | Load balancing method based on distributed storage and storage medium |
WO2023155703A1 (en) * | 2022-02-18 | 2023-08-24 | 华为技术有限公司 | Workload feature extraction method and apparatus |
CN117155871A (en) * | 2023-10-31 | 2023-12-01 | 山东衡昊信息技术有限公司 | Port industrial Internet point position low-delay concurrent processing method |
CN117880206A (en) * | 2024-03-12 | 2024-04-12 | 深圳市艾奥科技有限公司 | Load balancing method and system for Internet of things management equipment |
WO2024082861A1 (en) * | 2022-10-20 | 2024-04-25 | 天翼数字生活科技有限公司 | Cloud storage scheduling system applied to video monitoring |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070143460A1 (en) * | 2005-12-19 | 2007-06-21 | International Business Machines Corporation | Load-balancing metrics for adaptive dispatching of long asynchronous network requests |
US8949410B2 (en) * | 2010-09-10 | 2015-02-03 | Cisco Technology, Inc. | Server load balancer scaling for virtual servers |
WO2016133965A8 (en) * | 2015-02-18 | 2016-10-13 | KEMP Technologies Inc. | Methods for intelligent data traffic steering |
CN107645520A (en) * | 2016-07-21 | 2018-01-30 | 阿里巴巴集团控股有限公司 | A kind of load-balancing method, device and system |
CN108200156A (en) * | 2017-12-29 | 2018-06-22 | 南京邮电大学 | The dynamic load balancing method of distributed file system under a kind of cloud environment |
CN108667878A (en) * | 2017-03-31 | 2018-10-16 | 北京京东尚科信息技术有限公司 | Server load balancing method and device, storage medium, electronic equipment |
CN109120715A (en) * | 2018-09-21 | 2019-01-01 | 华南理工大学 | Dynamic load balancing method under a kind of cloud environment |
CN109710412A (en) * | 2018-12-28 | 2019-05-03 | 广州市巨硅信息科技有限公司 | A kind of Nginx load-balancing method based on dynamical feedback |
- 2019
  - 2019-05-20 CN CN201910418947.7A patent/CN110149395A/en active Pending
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070143460A1 (en) * | 2005-12-19 | 2007-06-21 | International Business Machines Corporation | Load-balancing metrics for adaptive dispatching of long asynchronous network requests |
US8949410B2 (en) * | 2010-09-10 | 2015-02-03 | Cisco Technology, Inc. | Server load balancer scaling for virtual servers |
WO2016133965A8 (en) * | 2015-02-18 | 2016-10-13 | KEMP Technologies Inc. | Methods for intelligent data traffic steering |
CN107645520A (en) * | 2016-07-21 | 2018-01-30 | 阿里巴巴集团控股有限公司 | A kind of load-balancing method, device and system |
CN108667878A (en) * | 2017-03-31 | 2018-10-16 | 北京京东尚科信息技术有限公司 | Server load balancing method and device, storage medium, electronic equipment |
CN108200156A (en) * | 2017-12-29 | 2018-06-22 | 南京邮电大学 | The dynamic load balancing method of distributed file system under a kind of cloud environment |
CN109120715A (en) * | 2018-09-21 | 2019-01-01 | 华南理工大学 | Dynamic load balancing method under a kind of cloud environment |
CN109710412A (en) * | 2018-12-28 | 2019-05-03 | 广州市巨硅信息科技有限公司 | A kind of Nginx load-balancing method based on dynamical feedback |
Non-Patent Citations (2)
Title |
---|
Yang Yuxia, "Research on Cluster Construction Technology for Geographic Information Services Based on Nginx", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series *
Xiong Jianbo, "Research on Improving the Load Balancing Algorithm of the FastDFS Distributed File System", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series *
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110691118A (en) * | 2019-08-30 | 2020-01-14 | 许昌许继软件技术有限公司 | Service selection method and device in micro-service cluster |
CN110708369B (en) * | 2019-09-25 | 2022-09-16 | 深圳市网心科技有限公司 | File deployment method and device for equipment nodes, scheduling server and storage medium |
CN110708369A (en) * | 2019-09-25 | 2020-01-17 | 深圳市网心科技有限公司 | File deployment method and device for equipment nodes, scheduling server and storage medium |
CN111324464A (en) * | 2020-03-12 | 2020-06-23 | 北京首汽智行科技有限公司 | Load distribution method based on micro-service architecture |
CN111586134A (en) * | 2020-04-29 | 2020-08-25 | 新浪网技术(中国)有限公司 | CDN node overload scheduling method and system |
CN111526208A (en) * | 2020-05-06 | 2020-08-11 | 重庆邮电大学 | High-concurrency cloud platform file transmission optimization method based on micro-service |
CN111813542A (en) * | 2020-06-18 | 2020-10-23 | 浙大宁波理工学院 | Load balancing method and device for parallel processing of large-scale graph analysis tasks |
CN111813542B (en) * | 2020-06-18 | 2024-02-13 | 浙大宁波理工学院 | Load balancing method and device for parallel processing of large-scale graph analysis task |
CN111782626A (en) * | 2020-08-14 | 2020-10-16 | 工银科技有限公司 | Task allocation method and device, distributed system, electronic device and medium |
CN113297027A (en) * | 2020-08-31 | 2021-08-24 | 阿里巴巴集团控股有限公司 | Method and device for selecting computing node and database |
CN113099252A (en) * | 2021-03-29 | 2021-07-09 | 浙江工业大学 | Remote feeder video pushing system based on SIP and RTMP |
CN113556397A (en) * | 2021-07-21 | 2021-10-26 | 山东建筑大学 | Cloud service resource scheduling method facing gateway of Internet of things |
CN113556397B (en) * | 2021-07-21 | 2022-05-06 | 山东建筑大学 | Cloud service resource scheduling method facing gateway of Internet of things |
CN113608878A (en) * | 2021-08-18 | 2021-11-05 | 上海德拓信息技术股份有限公司 | Task distributed scheduling method and system based on resource weight calculation |
CN113886081A (en) * | 2021-09-29 | 2022-01-04 | 南京地铁建设有限责任公司 | Station multi-face-brushing array face library segmentation method based on load balancing |
CN114443247A (en) * | 2021-12-29 | 2022-05-06 | 天翼云科技有限公司 | Task scheduling method and device |
WO2023155703A1 (en) * | 2022-02-18 | 2023-08-24 | 华为技术有限公司 | Workload feature extraction method and apparatus |
CN114567637A (en) * | 2022-03-01 | 2022-05-31 | 浪潮云信息技术股份公司 | Method and system for intelligently setting weight of load balancing back-end server |
CN115208889A (en) * | 2022-05-12 | 2022-10-18 | 国家信息中心 | High-concurrency high-flow video safety isolation transmission method and system |
CN115208889B (en) * | 2022-05-12 | 2023-11-28 | 国家信息中心 | High-concurrency large-flow video safety isolation transmission method and system |
WO2024082861A1 (en) * | 2022-10-20 | 2024-04-25 | 天翼数字生活科技有限公司 | Cloud storage scheduling system applied to video monitoring |
CN115629717A (en) * | 2022-12-08 | 2023-01-20 | 四川汉唐云分布式存储技术有限公司 | Load balancing method based on distributed storage and storage medium |
CN117155871A (en) * | 2023-10-31 | 2023-12-01 | 山东衡昊信息技术有限公司 | Port industrial Internet point position low-delay concurrent processing method |
CN117155871B (en) * | 2023-10-31 | 2024-01-12 | 山东衡昊信息技术有限公司 | Port industrial Internet point position low-delay concurrent processing method |
CN117880206A (en) * | 2024-03-12 | 2024-04-12 | 深圳市艾奥科技有限公司 | Load balancing method and system for Internet of things management equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110149395A (en) | Dynamic load balancing method for high-concurrency access to massive small files | |
Yeung et al. | Horus: Interference-aware and prediction-based scheduling in deep learning systems | |
US11558244B2 (en) | Improving performance of multi-processor computer systems | |
CN109976917B (en) | Load scheduling method, device, load scheduler, storage medium and system | |
US9703285B2 (en) | Fair share scheduling for mixed clusters with multiple resources | |
CN109120715A (en) | Dynamic load balancing method under a kind of cloud environment | |
WO2020206705A1 (en) | Cluster node load state prediction-based job scheduling method | |
CN107832153B (en) | Hadoop cluster resource self-adaptive allocation method | |
US5537542A (en) | Apparatus and method for managing a server workload according to client performance goals in a client/server data processing system | |
US8683472B2 (en) | Adjusting thread priority to optimize computer system performance and the utilization of computer system resources | |
US10534542B2 (en) | Dynamic core allocation for consistent performance in a non-preemptive scheduling environment | |
CN103294546B (en) | The online moving method of virtual machine of multi-dimensional resource performance interference aware and system | |
US9081621B2 (en) | Efficient input/output-aware multi-processor virtual machine scheduling | |
CN112835698B (en) | Dynamic load balancing method for request classification processing based on heterogeneous clusters | |
US20120005685A1 (en) | Information Processing Grid and Method for High Performance and Efficient Resource Utilization | |
US10305724B2 (en) | Distributed scheduler | |
US12093530B2 (en) | Workload management using a trained model | |
CN109981419A (en) | Test method, device, system, equipment and the storage medium of load balancing characteristic | |
CN109032800A (en) | A kind of load equilibration scheduling method, load balancer, server and system | |
CN115562870A (en) | Method for constructing task node resources of cluster | |
Lu et al. | InSTechAH: Cost-effectively autoscaling smart computing hadoop cluster in private cloud | |
CN110471761A (en) | Control method, user equipment, storage medium and the device of server | |
US10282140B2 (en) | I/O workload scheduling manager for RAID/non-RAID flash based storage systems for TCO and WAF optimizations | |
CN110928649A (en) | Resource scheduling method and device | |
KR101394365B1 (en) | Apparatus and method for allocating processor in virtualization environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190820 |