CN102970241A - Multipath load balancing method and multipath load balancing device - Google Patents
- Publication number
- CN102970241A, CN2012104420690A, CN201210442069A
- Authority
- CN
- China
- Prior art keywords
- network
- output
- path
- layer node
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention provides a multipath load balancing method and device, relating to the field of computer applications, and solves the problem that load balancing cannot be achieved for nonlinear input/output (IO) access. The method comprises: acquiring the IO size, and the number of outstanding IOs and the IO response time of each path; adjusting the weight of each path according to the output of a back-propagation (BP) network; and determining the path to which the next IO should be sent according to the adjusted weights. The technical scheme is applicable to load-balancing networks and achieves load balancing with high IO throughput.
Description
Technical field
The present invention relates to the field of computer applications, and in particular to a load balancing method and device.
Background technology
With the development of computer technology, users place ever higher requirements on the availability of computer systems. Availability here covers every aspect of a computer system: not only the computer itself, but also the equipment that stores its data. At present, high availability has been achieved at every level of the computer system. For example, high-availability clusters ensure that when software fails or a node goes down, another node can take over the service; RAID ensures that a disk array can still be read and written when some disks fail; and multipath software ensures that a computing node can still access the disk array when a switch or a line fails.
Commonly used load-balancing algorithms include minimum queue depth, weighted path and round-robin scheduling. These algorithms are relatively simple, and each suits only input/output (IO) access under certain conditions, so multipath software must configure the load-balancing algorithm according to the application. When the application's IO access pattern is irregular, or follows no linear rule at all, these algorithms are no longer applicable.
Summary of the invention
The invention provides a load balancing method and device, solving the problem that load balancing cannot be achieved for nonlinear IO access.
A load balancing method comprises:
obtaining the IO size, and the number of outstanding IOs and the IO response time of each path;
adjusting the weight of each path according to the output of a BP network;
determining the path to which the next IO should be sent according to the adjusted weights.
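The three steps above can be sketched as a single dispatch routine. This is an illustration only: `Path`, `dispatch_next_io` and the greedy highest-weight choice are placeholders of this sketch, not names from the patent, and the BP network is abstracted as a function that maps the gathered statistics to one weight per path.

```python
# Sketch of the claimed method: gather per-path IO statistics, let a
# weight function (standing in for the BP network) turn them into path
# weights, then send the next IO to the best-weighted path.
from dataclasses import dataclass, field

@dataclass
class Path:
    outstanding: int          # number of IOs still queued on this path
    response_time: float      # recent IO response time (e.g. in ms)
    sent: list = field(default_factory=list)

    def send(self, io_size):
        self.sent.append(io_size)

def dispatch_next_io(path_weights_fn, paths, io_size):
    # Step 1: obtain the IO size plus each path's outstanding-IO count
    # and response time (the inputs named in the claims).
    features = [io_size] + [p.outstanding for p in paths] \
                         + [p.response_time for p in paths]
    # Step 2: the weight function (BP network stand-in) yields one
    # weight per path.
    weights = path_weights_fn(features)
    # Step 3: send the IO to the path with the highest adjusted weight.
    best = max(range(len(paths)), key=lambda i: weights[i])
    paths[best].send(io_size)
    return best
```

In use, `path_weights_fn` would be the trained BP network's forward pass; here any callable returning one weight per path will do.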
Preferably, before the step of obtaining the IO size and the number of outstanding IOs and IO response time of each path, the method further comprises:
initializing the internal parameter weights of the back-propagation (BP) network.
Preferably, the BP network comprises input layer nodes, output layer nodes and one or more hidden layer nodes, and adjusting the weight of each path according to the output of the BP network comprises:
the input layer nodes take the IO size and the number of outstanding IOs and IO response time of each path as the input information, and propagate it forward to the hidden layer nodes;
the hidden layer nodes process the input information according to the action function

f(x) = 1 / (1 + e^(−x/Q))

and output the result to the output layer nodes, where f is the neuron's nonlinear action function, Q is a parameter adjusting the shape of the sigmoid, and x is the input information;
the output layer nodes output the result.
Preferably, adjusting the weight of each path according to the output of the BP network further comprises:
switching to back propagation when the output of the BP network is not the expected output;
returning the error signal along the original connection path, and modifying the weights of the neurons in each layer of the BP network so as to minimize the error signal.
Preferably, the IO size is between 512B and 256KB.
The present invention also provides a load balancing device, comprising:
an information acquisition module, configured to obtain the IO size, and the number of outstanding IOs and the IO response time of each path;
a BP network self-learning module, configured to adjust the weight of each path according to the output of the BP network;
an IO distribution module, configured to determine the path to which the next IO should be sent according to the adjusted weights.
Preferably, the device further comprises an initialization module, configured to initialize the internal parameter weights of the BP network.
Preferably, the BP network comprises input layer nodes, output layer nodes and one or more hidden layer nodes, and the BP network self-learning module comprises:
an input layer processing unit, configured to take the IO size and the number of outstanding IOs and IO response time of each path as the input information and propagate it forward to the hidden layer nodes;
a hidden layer node processing unit, configured to process the input information according to the action function f(x) = 1 / (1 + e^(−x/Q)) and output the result to the output layer nodes, where f is the neuron's nonlinear action function, Q is a parameter adjusting the shape of the sigmoid, and x is the input information;
an output layer node processing unit, configured to output the result.
Preferably, the BP network self-learning module further comprises:
a switching unit, configured to switch to back propagation when the output of the BP network is not the expected output;
a weight modification unit, configured to return the error signal along the original connection path and modify the weights of the neurons in each layer of the BP network so as to minimize the error signal.
The invention provides a load balancing method and device that obtain the IO size and the number of outstanding IOs and the IO response time of each path, adjust the weight of each path according to the output of a BP network, and then determine the path to which the next IO should be sent according to the adjusted weights. This achieves load balancing with high IO throughput and solves the problem that load balancing cannot be achieved for nonlinear IO access.
Description of drawings
Fig. 1 is a flow chart of a load balancing method provided by embodiment one of the present invention;
Fig. 2 is a schematic diagram of the BP network structure in an embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a load balancing device provided by embodiment two of the present invention;
Fig. 4 is a schematic structural diagram of the BP network self-learning module 302 in Fig. 3;
Fig. 5 is a graph of the test results of an embodiment of the present invention.
Embodiment
Commonly used load-balancing algorithms include minimum queue depth, weighted path and round-robin scheduling. These algorithms are relatively simple, and each suits only IO access under certain conditions, so multipath software must configure the load-balancing algorithm according to the application. When the application's IO access pattern is irregular, or follows no linear rule at all, these algorithms are no longer applicable.
To solve the above problem, embodiments of the present invention provide a load balancing method and device that dynamically adjust the weight of each path according to the application's IO data, thereby ensuring that IOs are processed through the paths as quickly as possible and improving IO throughput and scheduling efficiency.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that, as long as they do not conflict, the embodiments in this application and the features of the embodiments may be combined arbitrarily.
First, embodiment one of the present invention is described with reference to the drawings.
An embodiment of the present invention provides a load balancing method; the flow of completing load balancing with the method is shown in Fig. 1 and comprises the following steps.
Step 101: initialize the internal parameter weights of the BP network.
The BP network structure used by embodiments of the present invention is shown in Fig. 2. Besides input layer nodes and output layer nodes, the BP network may also have one or more hidden layer nodes. An input signal is first propagated forward to the hidden layer nodes; after the action function is applied, the output of the hidden nodes is propagated to the output nodes, which finally give the output result. The excitation function of a node is usually chosen as an S-type function, such as

f(x) = 1 / (1 + e^(−x/Q))

where Q is a parameter adjusting the shape of the sigmoid excitation function, and x is the input information.
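As a sketch, the S-type excitation function with shape parameter Q can be written as follows; the exact form f(x) = 1/(1 + e^(−x/Q)) is the common textbook choice and is assumed here.

```python
import math

def sigmoid(x, Q=1.0):
    # S-type excitation function. Q adjusts how steeply the output
    # rises from 0 toward 1 around x = 0: larger Q gives a flatter
    # curve. Assumed form: 1 / (1 + e^(-x/Q)).
    return 1.0 / (1.0 + math.exp(-x / Q))
```

Note that f(0) = 0.5 regardless of Q, and the output always lies strictly between 0 and 1, which is what lets the output layer be read as normalized path weights.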
Step 102: obtain the IO size, and the number of outstanding IOs and the IO response time of each path.
In this step, the IO size and, for each path, the number of outstanding IOs and the IO response time are obtained.
Step 103: adjust the weight of each path according to the output of the BP network.
In this step, the weight of each path is computed by the BP network, so that the distribution module of the load balancer can determine the path to which the next IO should be sent according to the weights.
Specifically, the IO size, the number of outstanding IOs on each path and the IO response time are fed into the BP network, and the weight of each path is obtained from the BP network's inference; at the same time the information of each neuron in the BP network can be adjusted. Because this neural network can adjust its internal information, that is, it can learn by itself, after processing a number of IOs the network memorizes the application layer's IO pattern and adjusts the weight of each path to an optimal value, ensuring that the computing node achieves the best performance when accessing the storage.
In the embodiments of the present invention, the self-learning process of the BP network consists of two processes: forward propagation and back propagation. In forward propagation, the input information is processed layer by layer from the input layer through the hidden layers and passed to the output layer; the state of the neurons in each layer affects only the state of the neurons in the next layer. If the output layer does not produce the expected output, the process switches to back propagation: the error signal is returned along the original connection path, and the weights of the neurons in each layer are modified so as to minimize the error signal.
Below, the self-learning algorithm of the BP network in the embodiments of the present invention is described in conjunction with a concrete weight-calculation example.
Consider an arbitrary network containing n nodes, each of Sigmoid type. For simplicity, assume the network has a single output y and that node i outputs O_i, and suppose there are N samples (x_k, y_k) (k = 1, 2, 3, …, N). For a given input x_k (an element of the samples), let the corresponding network output be ŷ_k, let the output of node i be O_ik, and let the input of node j be

net_jk = Σ_i W_ij O_ik

The error function is defined as

E = (1/2) Σ_k (y_k − ŷ_k)²

Define δ_jk = ∂E_k/∂net_jk. When j is an output node,

δ_jk = −(y_k − ŷ_k) f′(net_jk)

If j is not an output node, then

δ_jk = f′(net_jk) Σ_m δ_mk W_jm

where m ranges over the nodes fed by node j. Therefore,

∂E_k/∂W_ij = δ_jk O_ik

Suppose the BP network has M layers, the M-th layer contains only output nodes, and the first layer consists of the input nodes. The BP algorithm is then:

Step 1: choose the initial weights W; generally, W defaults to 0.

Step 2: repeat the following process until convergence:

1) for k = 1 to N:

a) compute O_ik, net_jk and ŷ_k (forward process);

b) for each layer from M back to 2, compute δ_jk for every node j in the layer (reverse process).

Step 3: modify the weights:

W_ij ← W_ij − μ Σ_k δ_jk O_ik, with μ > 0.
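The BP algorithm above can be sketched as a single-hidden-layer network with sigmoid units, squared error E = ½Σ(y − ŷ)² and learning rate μ. This is a minimal NumPy illustration, not the patent's implementation: the layer sizes are arbitrary, and small random initialization is used instead of the zero default mentioned in the text (all-zero weights would leave every hidden unit identical and unable to specialize).

```python
import numpy as np

class TinyBP:
    """One-hidden-layer BP network: forward pass, back-propagated
    deltas, and the weight update W <- W - mu * dE/dW."""

    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # Small random initialization (illustrative assumption).
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))

    @staticmethod
    def f(x):
        # Sigmoid action function; its derivative is f(x) * (1 - f(x)).
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        self.o1 = self.f(x @ self.W1)        # hidden outputs O_ik
        self.o2 = self.f(self.o1 @ self.W2)  # network output
        return self.o2

    def train_step(self, x, y, mu=0.5):
        out = self.forward(x)                # forward process
        err = y - out
        # Output-node delta: (y - o) * f'(net), with f' = o * (1 - o).
        d2 = err * out * (1.0 - out)
        # Hidden-node delta: back-propagate the error through W2.
        d1 = (d2 @ self.W2.T) * self.o1 * (1.0 - self.o1)
        # Gradient-descent weight update, mu > 0.
        self.W2 += mu * np.outer(self.o1, d2)
        self.W1 += mu * np.outer(x, d1)
        return 0.5 * float(err @ err)        # squared error E_k
```

Calling `train_step` repeatedly on the same samples drives the squared error down, which is the "repeat until convergence" loop of Step 2.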
The BP algorithm adopted by the embodiments of the present invention turns the input/output problem of a set of samples into a nonlinear optimization problem: if the neural network is regarded as a mapping from input to output, this mapping is nonlinear.
Step 104: determine, according to the adjusted weights, the path to which the next IO should be sent.
In this step, according to the weight of each path, a specific IO is sent to a certain path, which then carries out the transmission of the IO.
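The text only requires that the path be chosen according to the weights; it does not fix the selection rule. The sketch below therefore assumes weighted random selection, under which each path receives IOs in proportion to its weight over many dispatches.

```python
import random

def pick_path(weights, rng=random.random):
    """Return the index of a path chosen with probability proportional
    to its weight (weights must be non-negative, not all zero)."""
    total = sum(weights)
    r = rng() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off
```

A deterministic alternative, always picking the highest-weight path, would starve slower paths entirely; proportional selection keeps every usable path warm while still favoring the better ones.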
Embodiment two of the present invention is described below with reference to the drawings.
An embodiment of the present invention provides a load balancing device whose structure, shown in Fig. 3, comprises:
an information acquisition module, configured to obtain the IO size, and the number of outstanding IOs and the IO response time of each path;
a BP network self-learning module 302, configured to adjust the weight of each path according to the output of the BP network;
an IO distribution module, configured to determine the path to which the next IO should be sent according to the adjusted weights.
Preferably, the device further comprises an initialization module 304, configured to initialize the internal parameter weights of the BP network.
Preferably, the BP network comprises input layer nodes, output layer nodes and one or more hidden layer nodes, and the BP network self-learning module 302, as shown in Fig. 4, comprises:
an input layer processing unit 3021, configured to take the IO size and the number of outstanding IOs and IO response time of each path as the input information and propagate it forward to the hidden layer nodes;
a hidden layer node processing unit 3022, configured to process the input information according to the action function f(x) = 1 / (1 + e^(−x/Q)) and output the result to the output layer nodes, where f is the neuron's nonlinear action function, Q is a parameter adjusting the shape of the sigmoid, and x is the input information;
an output layer node processing unit 3023, configured to output the result.
Preferably, the BP network self-learning module 302 further comprises:
a switching unit, configured to switch to back propagation when the output of the BP network is not the expected output;
a weight modification unit, configured to return the error signal along the original connection path and modify the weights of the neurons in each layer of the BP network so as to minimize the error signal.
The core of the embodiments of the present invention is the BP network self-learning module, which adapts well to changes in application-layer read/write access. When the IO access pattern of the application layer changes, the load balancing method and device provided by the embodiments of the present invention can learn and adjust the weight of each path in time, thereby ensuring the computing node's best performance in accessing the storage.
Fig. 5 is a graph of the test results of an embodiment of the present invention, for sequential writes whose IO size varies at random, limited to between 512B and 256KB. As can be seen from Fig. 5, after adjustment by the BP network, the IO throughput substantially reaches the sum of the maxima of the two paths.
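The workload described for the test can be reproduced approximately as follows. This is a sketch under stated assumptions: the text gives only the size range, so the uniform distribution and 512-byte sector alignment used here are assumptions.

```python
import random

SECTOR = 512
MAX_IO = 256 * 1024  # 256KB

def random_write_sizes(n, seed=0):
    # Sequential-write workload with IO sizes varying at random between
    # 512B and 256KB, aligned to 512-byte sectors (assumed uniform).
    rng = random.Random(seed)
    return [rng.randrange(SECTOR, MAX_IO + SECTOR, SECTOR)
            for _ in range(n)]
```

Feeding such an irregular size sequence through the dispatcher is exactly the nonlinear access pattern for which fixed algorithms such as round-robin underperform.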
Embodiments of the present invention provide a load balancing method and device that obtain the IO size and the number of outstanding IOs and the IO response time of each path, adjust the weight of each path according to the output of a BP network, and then determine the path to which the next IO should be sent according to the adjusted weights. This achieves load balancing with high IO throughput and solves the problem that load balancing cannot be achieved for nonlinear IO access.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented as a computer program flow. The computer program may be stored in a computer-readable storage medium and executed on a corresponding hardware platform (such as a system, unit or device); when executed, it includes one of or a combination of the steps of the method embodiment.
Alternatively, all or part of the steps of the above embodiments may also be implemented with integrated circuits: the steps may each be made into an individual integrated circuit module, or several of the modules or steps among them may be made into a single integrated circuit module. Thus, the present invention is not restricted to any specific combination of hardware and software.
Each device/functional module/functional unit in the above embodiments may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices.
When each device/functional module/functional unit in the above embodiments is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. The above-mentioned computer-readable storage medium may be a read-only memory, a magnetic disk or an optical disc, etc.
Any change or replacement readily conceivable by anyone familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope described in the claims.
Claims (9)
1. A load balancing method, characterized by comprising:
obtaining the input/output (IO) size, and the number of outstanding IOs and the IO response time of each path;
adjusting the weight of each path according to the output of a back-propagation (BP) network;
determining the path to which the next IO should be sent according to the adjusted weights.
2. The load balancing method according to claim 1, characterized in that, before the step of obtaining the IO size and the number of outstanding IOs and IO response time of each path, the method further comprises:
initializing the internal parameter weights of the BP network.
3. The load balancing method according to claim 2, characterized in that the BP network comprises input layer nodes, output layer nodes and one or more hidden layer nodes, and adjusting the weight of each path according to the output of the BP network comprises:
the input layer nodes take the IO size and the number of outstanding IOs and IO response time of each path as the input information and propagate it forward to the hidden layer nodes;
the hidden layer nodes process the input information according to the action function f(x) = 1 / (1 + e^(−x/Q)) and output the result to the output layer nodes, where f is the neuron's nonlinear action function, Q is a parameter adjusting the shape of the sigmoid, and x is the input information;
the output layer nodes output the result.
4. The load balancing method according to claim 3, characterized in that adjusting the weight of each path according to the output of the BP network further comprises:
switching to back propagation when the output of the BP network is not the expected output;
returning the error signal along the original connection path, and modifying the weights of the neurons in each layer of the BP network so as to minimize the error signal.
5. The load balancing method according to claim 1, characterized in that the IO size is between 512B and 256KB.
6. A load balancing device, characterized by comprising:
an information acquisition module, configured to obtain the IO size, and the number of outstanding IOs and the IO response time of each path;
a BP network self-learning module, configured to adjust the weight of each path according to the output of the BP network;
an IO distribution module, configured to determine the path to which the next IO should be sent according to the adjusted weights.
7. The load balancing device according to claim 6, characterized in that the device further comprises an initialization module, configured to initialize the internal parameter weights of the BP network.
8. The load balancing device according to claim 7, characterized in that the BP network comprises input layer nodes, output layer nodes and one or more hidden layer nodes, and the BP network self-learning module comprises:
an input layer processing unit, configured to take the IO size and the number of outstanding IOs and IO response time of each path as the input information and propagate it forward to the hidden layer nodes;
a hidden layer node processing unit, configured to process the input information according to the action function f(x) = 1 / (1 + e^(−x/Q)) and output the result to the output layer nodes, where f is the neuron's nonlinear action function, Q is a parameter adjusting the shape of the sigmoid, and x is the input information;
an output layer node processing unit, configured to output the result.
9. The load balancing device according to claim 6, characterized in that the BP network self-learning module further comprises:
a switching unit, configured to switch to back propagation when the output of the BP network is not the expected output;
a weight modification unit, configured to return the error signal along the original connection path and modify the weights of the neurons in each layer of the BP network so as to minimize the error signal.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN2012104420690A | 2012-11-07 | 2012-11-07 | Multipath load balancing method and multipath load balancing device
Publications (1)

Publication Number | Publication Date
---|---
CN102970241A (en) | 2013-03-13
Family
ID=47800128

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN2012104420690A | Multipath load balancing method and multipath load balancing device (Pending) | 2012-11-07 | 2012-11-07

Country Status (1)

Country | Link
---|---
CN | CN102970241A (en)
Citations (3)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN101695050A * | 2009-10-19 | 2010-04-14 | 浪潮电子信息产业股份有限公司 | Dynamic load balancing method based on self-adapting prediction of network flow
CN102647760A * | 2012-03-02 | 2012-08-22 | 黄东 | Multi-service-network-based efficient service resource management method
CN102761601A * | 2012-05-30 | 2012-10-31 | 浪潮电子信息产业股份有限公司 | MPIO (Multiple Path Input/Output) polling method based on dynamic weighting paths
Cited By (8)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN103236985A * | 2013-04-02 | 2013-08-07 | 浪潮电子信息产业股份有限公司 | Multipath load balancing system for accessing storage
CN103324444A * | 2013-05-24 | 2013-09-25 | 浪潮电子信息产业股份有限公司 | Host terminal and storage terminal synergetic multi-control IO dispatch method
CN103324444B * | 2013-05-24 | 2017-09-22 | 浪潮电子信息产业股份有限公司 | A kind of many control I O scheduling methods that host side is cooperateed with storage end
CN104679575A * | 2013-11-28 | 2015-06-03 | 阿里巴巴集团控股有限公司 | Control system and control method for input and output flow
CN104679575B * | 2013-11-28 | 2018-08-24 | 阿里巴巴集团控股有限公司 | The control system and its method of iostream
CN106293533A * | 2016-08-11 | 2017-01-04 | 浪潮(北京)电子信息产业有限公司 | A kind of based on the load-balancing method between storage multipath and system
CN113608690A * | 2021-07-17 | 2021-11-05 | 济南浪潮数据技术有限公司 | Method, device and equipment for iscsi target multipath grouping and readable medium
CN113608690B * | 2021-07-17 | 2023-12-26 | 济南浪潮数据技术有限公司 | Method, device, equipment and readable medium for iscsi target multipath grouping
Legal Events

Date | Code | Title | Description
---|---|---|---
| C06 | Publication |
| PB01 | Publication |
| C10 | Entry into substantive examination |
| SE01 | Entry into force of request for substantive examination |
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20130313