CN107517243A - Request scheduling method and device - Google Patents
Request scheduling method and device
- Publication number
- CN107517243A CN107517243A CN201610511615.XA CN201610511615A CN107517243A CN 107517243 A CN107517243 A CN 107517243A CN 201610511615 A CN201610511615 A CN 201610511615A CN 107517243 A CN107517243 A CN 107517243A
- Authority
- CN
- China
- Prior art keywords
- url request
- request
- url
- content
- cache
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/56—Provisioning of proxy services
- H04L67/568—Storing data temporarily at an intermediate stage, e.g. caching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Information Transfer Between Computers (AREA)
- Computer And Data Communications (AREA)
Abstract
The invention provides a request scheduling method and device. The method includes: receiving a Uniform Resource Locator (URL) request sent by a terminal; judging whether the content requested by the URL request is hot content; and, when the judgment result is that the requested content is hot content, dispatching the URL request to a second cache server other than the first cache server to which the URL request was previously allocated. The invention solves the problem in the related art that always allocating the same URL request to the same cache device places a high load on a single cache device, thereby relieving the load on individual cache devices.
Description
Technical field
The present invention relates to the field of communications, and in particular to a request scheduling method and device.
Background art
With the wide use of content delivery networks (Content Delivery Network, CDN) in many industries, the CDN architecture has become increasingly familiar. The simplest CDN consists of one DNS server responsible for global load balancing and one cache server (cache) at each node.
With the popularization of the Internet and the growing number of smartphone users, a single cache on a node can no longer carry the load; multiple caches must work at the same time and share the load, and a server load balancing (Server Load Balancing, SLB) device is needed to coordinate them.
Existing schemes have two main problems: 1) requests for the same file content, whether or not the content is hot, are always dispatched to the same cache device, which greatly increases the load on that device and cannot support high-concurrency scenarios; 2) if a cache device fails, requests for the content stored on it must go back to the origin, which greatly reduces service capability.
For hot content, reducing the load on a single cache device, making multiple devices cooperate, and improving service capability and the perceived user experience are of great significance for the promotion and use of network caching. The prior art still needs improvement and development.
For the problem in the related art that allocating the same URL request to the same cache device places a high load on a single cache device, no effective solution has yet been proposed.
Summary of the invention
Embodiments of the invention provide a request scheduling method and device, to at least solve the problem in the related art that allocating the same URL request to the same cache device places a high load on a single cache device.
According to one embodiment of the present invention, a request scheduling method is provided, including: receiving a Uniform Resource Locator (URL) request sent by a terminal; judging whether the content requested by the URL request is hot content; and, when the judgment result is that the requested content is hot content, dispatching the URL request to a second cache server other than the first cache server to which the URL request was previously allocated.
Optionally, before the URL request is dispatched to the second cache server other than the first cache server to which it was previously allocated, the method further includes: recording each URL request sent by the terminal, and the Internet Protocol (IP) address of the cache server to which the URL request was first allocated.
Optionally, dispatching the URL request to the second cache server other than the first cache server to which it was previously allocated includes: judging whether the URL request is sent for the first time; and, when the judgment result is that the URL request is not sent for the first time, dispatching the URL request to a cache server other than the one at the IP address to which the URL request was previously allocated.
Optionally, when the content requested by the URL request is judged to be hot content, the method further includes: issuing content replication information to the cache servers that need to perform hot-content replication, so that a cache server receiving the content replication information replicates the hot content; the content replication information includes at least one of: the URL of the hot content, and the Internet Protocol (IP) address of the cache server that has already cached the hot content.
Optionally, the method further includes: when the judgment result is that the content requested by the URL request is not hot content, performing a hash calculation on the unique identifier (ID) of the requested content to determine the cache server that will serve the URL request.
According to another embodiment of the present invention, a request scheduling device is provided, including: a receiving module, configured to receive a Uniform Resource Locator (URL) request sent by a terminal; a judging module, configured to judge whether the content requested by the URL request is hot content; and a scheduling module, configured to, when the judgment result is that the requested content is hot content, dispatch the URL request to a second cache server other than the first cache server to which the URL request was previously allocated.
Optionally, the device further includes: a recording module, configured to record each URL request sent by the terminal and the Internet Protocol (IP) address of the cache server to which the URL request was first allocated, before the URL request is dispatched to the second cache server other than the first cache server to which it was previously allocated.
Optionally, the scheduling module includes: a judging unit, configured to judge whether the URL request is sent for the first time; and a scheduling unit, configured to, when the judgment result is that the URL request is not sent for the first time, dispatch the URL request to a cache server other than the one at the IP address to which the URL request was previously allocated.
Optionally, the device further includes: a processing module, configured to, when the content requested by the URL request is judged to be hot content, issue content replication information to the cache servers that need to perform hot-content replication, so that a cache server receiving the content replication information replicates the hot content; the content replication information includes at least one of: the URL of the hot content, and the Internet Protocol (IP) address of the cache server that has cached the hot content.
Optionally, the device further includes: a computing module, configured to, when the judgment result is that the content requested by the URL request is not hot content, perform a hash calculation on the unique identifier (ID) of the requested content to determine the cache server that will serve the URL request.
According to still another embodiment of the present invention, a storage medium is also provided. The storage medium is configured to store program code for performing the following steps: receiving a Uniform Resource Locator (URL) request sent by a terminal; judging whether the content requested by the URL request is hot content; and, when the judgment result is that the requested content is hot content, dispatching the URL request to a second cache server other than the first cache server to which the URL request was previously allocated.
With the present invention, a Uniform Resource Locator (URL) request sent by a terminal is received; whether the content requested by the URL request is hot content is judged; and, when the judgment result is that the requested content is hot content, the URL request is dispatched to a second cache server other than the first cache server to which it was previously allocated. This solves the problem in the related art that allocating the same URL request to the same cache device places a high load on a single cache device, and relieves the load on individual cache devices.
Brief description of the drawings
The accompanying drawings described here provide a further understanding of the invention and form a part of this application; the illustrative embodiments of the invention and their description explain the invention and do not unduly limit it. In the accompanying drawings:
Fig. 1 is a flowchart of a request scheduling method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of the scheduling flow of the server load balancing device (SLB) according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of hot-content replication between cache devices according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of the cache device polling service flow according to an embodiment of the present invention;
Fig. 5 is a structural block diagram of a request scheduling device according to an embodiment of the present invention;
Fig. 6 is a structural block diagram (1) of a request scheduling device according to an embodiment of the present invention;
Fig. 7 is a structural block diagram (2) of a request scheduling device according to an embodiment of the present invention;
Fig. 8 is a structural block diagram (3) of a request scheduling device according to an embodiment of the present invention;
Fig. 9 is a structural block diagram (4) of a request scheduling device according to an embodiment of the present invention.
Detailed description of embodiments
The present invention is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments. It should be noted that, where there is no conflict, the embodiments in this application and the features in the embodiments may be combined with each other.
It should also be noted that the terms "first", "second" and the like in the description, claims and accompanying drawings are used to distinguish similar objects, and not to describe a specific order or sequence.
Embodiment 1
This embodiment provides a request scheduling method. Fig. 1 is a flowchart of the request scheduling method according to an embodiment of the present invention. As shown in Fig. 1, the flow includes the following steps:
Step S102: receive a Uniform Resource Locator (URL) request sent by a terminal;
Step S104: judge whether the content requested by the URL request is hot content;
Step S106: when the judgment result is that the content requested by the URL request is hot content, dispatch the URL request to a second cache server other than the first cache server to which the URL request was previously allocated.
Optionally, in this embodiment, application scenarios of the above request scheduling method include, but are not limited to, multiple cache servers (caches) on one node working together. In such a scenario, a Uniform Resource Locator (URL) request sent by a terminal is received; whether the content requested by the URL request is hot content is judged; and, when it is, the URL request is dispatched to a second cache server other than the first cache server to which it was previously allocated. That is, in this embodiment, when the requested content is judged to be hot content, the same URL request is dispatched to different cache servers in turn by polling, which solves the problem in the related art that allocating the same URL request to the same cache device places a high load on a single cache device, and relieves the load on individual cache devices.
The present embodiment is explained below with reference to a specific example.
This example provides a request scheduling method and system which, on the one hand, reduce the load borne by a single cache device and, on the other hand, improve service capability and the perceived user experience. In this example a cache server is a cache device. The above request scheduling method is described in detail below with reference to the system architecture of this example.
1) Local load balancing device (SLB)
The SLB is responsible for load balancing among the caches within each node and ensures operating efficiency within the node. It also collects information about the node and its surroundings and keeps communicating with the global load balancer, so that load balancing of the whole system is realized. The SLB needs to record the URL of every user request; for the first request of each URL it also records the IP address of the cache device selected by the cid-hash algorithm, which makes the subsequent content replication between cache devices possible.
2) Hot-content replication module
Through an agreed content replication interface, the SLB packs the information needed for replication into a JSON message body and pushes it to the cache device that needs to perform the replication. The JSON body contains the URL of the hot content and the IP address of the cache device that has already cached it. After receiving the message, the cache device parses the information carried in the JSON body and initiates an HTTP request to the device that already has the content, thereby replicating the hot content.
3) Polling service module
The polling service module judges whether a subsequent user request is for hot content within the TOP N. If it is, the SLB uses the new polling algorithm and dispatches the request in turn to the two or more caches that have cached the content, so that the hot content is served alternately by two or more devices, effectively supporting high-concurrency scenarios. If it is not, the request is dispatched, as before, to a single device determined by the cid-hash algorithm.
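The decision made by the polling service module can be sketched as follows — a minimal sketch, assuming md5 as the cid-hash and a simple round-robin iterator per hot URL; the function and table names are illustrative, not the patent's:

```python
import hashlib
from itertools import cycle

caches = ["cache1", "cache2", "cache3"]

# Hot-content table: URL -> iterator that cycles over the devices caching it.
hot_table = {"url1": cycle(["cache1", "cache2"])}

def cid_hash(content_id, devices):
    """Pick a single device for non-hot content by hashing the content ID."""
    digest = hashlib.md5(content_id.encode()).hexdigest()
    return devices[int(digest, 16) % len(devices)]

def dispatch(url):
    """Hot content is served in turn by its replicas; other content by cid-hash."""
    if url in hot_table:
        return next(hot_table[url])
    return cid_hash(url, caches)

print(dispatch("url1"), dispatch("url1"))    # cache1 cache2 (replicas in turn)
print(dispatch("url4") == dispatch("url4"))  # True (non-hot: always one device)
```

In a real SLB the hot-content table would be rebuilt at the end of every detection cycle; here it is filled in by hand for illustration.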
In an optional embodiment, before the URL request is dispatched to the second cache server other than the first cache server to which it was previously allocated, the method further includes the following step:
Step S11: record each URL request sent by the terminal, and the Internet Protocol (IP) address of the cache server to which the URL request was first allocated.
By recording, in step S11, each URL request sent by the terminal and the IP address of the cache server first allocated to it, the load balancing device SLB can schedule requests so that the same URL request is dispatched to different cache servers in turn by polling, which solves the problem in the related art that allocating the same URL request to the same cache device places a high load on a single cache device, and relieves that load.
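The record kept in step S11 can be sketched as a request log plus a table of first-allocated IP addresses (a sketch; the names and in-memory structures are illustrative assumptions):

```python
request_log = []        # every URL request sent by the terminal
first_cache_ip = {}     # URL -> IP address of the cache server first allocated

def record_request(url, chosen_ip):
    """Log the request; remember only the first allocated cache IP per URL."""
    request_log.append(url)
    first_cache_ip.setdefault(url, chosen_ip)

record_request("url1", "10.0.0.1")
record_request("url1", "10.0.0.2")  # a later dispatch does not overwrite the first IP
print(first_cache_ip["url1"])       # 10.0.0.1
print(request_log.count("url1"))    # 2
```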
In an optional embodiment, dispatching the URL request to the second cache server other than the first cache server to which it was previously allocated includes the following steps:
Step S21: judge whether the URL request is sent for the first time;
Step S22: when the judgment result is that the URL request is not sent for the first time, dispatch the URL request to a cache server other than the one at the IP address to which the URL request was previously allocated.
Through steps S21 and S22, when the content requested by a URL request is judged to be hot content, the same URL request is dispatched to different cache servers in turn by polling, which further relieves the load on a single cache device.
In an optional embodiment, when the content requested by the URL request is judged to be hot content, the method further includes:
Step S31: issue content replication information to the cache servers that need to perform hot-content replication, so that a cache server receiving the content replication information replicates the hot content.
It should be noted that the content replication information includes at least one of: the URL of the hot content, and the Internet Protocol (IP) address of the cache server that has cached the hot content.
Through step S31, content replication information is issued to the cache servers that need to replicate the hot content, so that a cache server receiving the information replicates the hot content; as a result, after the URL request is dispatched to another cache server, the corresponding resource is available there, which improves the user experience.
In an optional embodiment, when the judgment result is that the content requested by the URL request is not hot content, a hash calculation is performed on the unique ID of the requested content to determine the cache server that will serve the URL request.
The present embodiment is further illustrated below with a specific example, in which the cache servers are cache devices.
As shown in Fig. 2, when a detection cycle is reached, the SLB counts the total number of requests for each URL within the cycle. Suppose the detection cycle is 1 hour and within that hour users requested 4 URLs:
url1: http://down10.zol.com.cn/zoldown/WeChat_C1012@428288@.exe
url2: http://flv5.bn.netease.com/videolib3/1511/10/WhBmc5859/HD/WhBmc5859.flv
url3: http://112.84.104.39/flv.bn.netease.com/videolib3/1511/10/YHbSI1252/HD/YHbSI1252.flv?wsiphost=local
url4: http://61.160.204.74/youku/65729AB85433D8271DA3B626C4/0300010B0455B930A185DB092B13A2E90C0903-79E7-CB79-5351-D0C2783BAF7B.flv&start=0
The total request counts are 6, 5, 4 and 3 respectively, and on their first requests the cid-hash algorithm selected the cache devices cache1, cache1, cache2 and cache3 respectively. If the hot set to be counted is TOP 3, then url1, url2 and url3 are hot content.
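The per-cycle counting and TOP N selection in this example can be sketched with a counter (a sketch under the assumption that the hot set is simply the N most-requested URLs of the cycle):

```python
from collections import Counter

def hot_urls(requests, top_n):
    """Count the requests of one detection cycle and return the TOP N hot URLs."""
    return [url for url, _ in Counter(requests).most_common(top_n)]

# 6, 5, 4 and 3 requests for url1..url4, as in the example above.
cycle_requests = ["url1"] * 6 + ["url2"] * 5 + ["url3"] * 4 + ["url4"] * 3
print(hot_urls(cycle_requests, 3))  # ['url1', 'url2', 'url3']
```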
As shown in Fig. 3, the device first chosen by hash for the hot content url1 is cache1. If 2 copies are required, the content of url1 must also be stored on cache2 or cache3; assume the device randomly chosen for replication is cache2. Similarly, the device randomly chosen to replicate the content of url2 is cache3, and the device randomly chosen to replicate the content of url3 is also cache3.
As shown in flow (1) in Fig. 3, the SLB encapsulates the content to be replicated, url1, and the IP of the source device cache1 in a JSON body, and pushes it through the content replication interface to the device that needs to perform the replication, cache2; ip2 in the message is the IP of the cache2 device, and the port is the management port 6620. After cache2 receives the content replication message sent by the SLB and parses the information carried in the message body, it pulls the content of url1 from cache1 and caches it locally, as shown in flow (2) in Fig. 3, where ip1 is the IP of the cache1 device and the port is the service port 6610. This completes the replication: the content of url1 is now cached on both cache1 and cache2. Similarly, cache3 requests cache1 to download and cache url2, and cache3 requests cache2 to download and cache url3. In this way all hot content is replicated between devices, and every piece of hot content is stored on two cache devices.
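The replication message and the pull it triggers can be sketched as follows; the JSON field names are illustrative assumptions, while ports 6620 (management) and 6610 (service) are taken from the example:

```python
import json

def build_replicate_msg(url, source_ip, source_port=6610):
    """JSON body the SLB pushes (to the management port, 6620) of the device
    that must replicate the hot content."""
    return json.dumps({"url": url, "source_ip": source_ip,
                       "source_port": source_port})

def handle_replicate_msg(body):
    """The receiving cache parses the body and forms the URL of the HTTP pull
    it would issue to the source device's service port."""
    info = json.loads(body)
    return "http://%s:%d%s" % (info["source_ip"], info["source_port"], info["url"])

msg = build_replicate_msg("/videolib3/url1.flv", "192.168.1.1")
print(handle_replicate_msg(msg))  # http://192.168.1.1:6610/videolib3/url1.flv
```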
As shown in Fig. 4, after hot-content replication is completed, the SLB keeps the md5 values corresponding to all hot content in the detection cycle. When a subsequent user request arrives at the SLB, the SLB first judges whether it is for hot content: if the content can be found in the hot-content table, it is hot content and the new polling algorithm is used, dispatching the request in turn to the two devices that have cached the content; non-hot content is served by a single device. For example, when url1 is requested again and the SLB first dispatches it to cache1, the next request for url1 will be dispatched by the SLB to cache2; likewise the hot content url2 and url3 are served alternately by two devices, while the non-hot url4 is always served on cache3. Polling across the devices that hold hot content reduces the load on a single device and effectively supports high-concurrency scenarios.
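The md5-keyed hot-content table and the alternating service described here can be sketched as follows (illustrative; assumes a last-served index per hot entry and hard-codes cache3 as the single device of the non-hot url4 from the example):

```python
import hashlib

replicas = {}    # md5 of a hot URL -> devices that have cached the content
last_index = {}  # md5 of a hot URL -> index of the device that served last

def serve(url):
    """Serve hot URLs alternately on their replicas; others on a single device."""
    key = hashlib.md5(url.encode()).hexdigest()
    if key not in replicas:
        return "cache3"  # non-hot content stays on its cid-hash device
    nxt = (last_index.get(key, -1) + 1) % len(replicas[key])
    last_index[key] = nxt
    return replicas[key][nxt]

replicas[hashlib.md5(b"url1").hexdigest()] = ["cache1", "cache2"]
print(serve("url1"), serve("url1"), serve("url1"))  # cache1 cache2 cache1
print(serve("url4"))  # cache3
```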
In summary, the request scheduling method provided by the present invention greatly reduces the load that a single cache device must bear, especially in high-concurrency scenarios caused by holidays or popular films and TV series, and markedly improves the carrying capacity of the cache servers. It also improves the load sharing and acceleration capability of the caches, which is of great significance for the commercial promotion of cache-based network storage and acceleration.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, although in many cases the former is the preferable implementation. Based on such understanding, the technical solution of the present invention, or the part of it that contributes to the prior art, may be embodied in the form of a software product; the computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present invention.
Embodiment 2
This embodiment further provides a request scheduling device, which is used to implement the above embodiments and preferred implementations; what has already been explained is not repeated here. As used below, the term "module" may be a combination of software and/or hardware that realizes a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 5 is a structural block diagram of the request scheduling device according to an embodiment of the present invention. As shown in Fig. 5, the device includes:
1) a receiving module 52, configured to receive a Uniform Resource Locator (URL) request sent by a terminal;
2) a judging module 54, configured to judge whether the content requested by the URL request is hot content;
3) a scheduling module 56, configured to, when the judgment result is that the requested content is hot content, dispatch the URL request to a second cache server other than the first cache server to which the URL request was previously allocated.
Optionally, in this embodiment, application scenarios of the above request scheduling device include, but are not limited to, multiple cache servers (caches) on one node working together. In such a scenario, a Uniform Resource Locator (URL) request sent by a terminal is received; whether the content requested by the URL request is hot content is judged; and, when it is, the URL request is dispatched to a second cache server other than the first cache server to which it was previously allocated. That is, in this embodiment, when the requested content is judged to be hot content, the same URL request is dispatched to different cache servers in turn by polling, which solves the problem in the related art that allocating the same URL request to the same cache device places a high load on a single cache device, and relieves the load on individual cache devices.
Fig. 6 is a structural block diagram (1) of the request scheduling device according to an embodiment of the present invention. As shown in Fig. 6, in addition to all the modules in Fig. 5, the device further includes:
1) a recording module 62, configured to record each URL request sent by the terminal and the IP address of the cache server to which the URL request was first allocated, before the URL request is dispatched to the second cache server other than the first cache server to which it was previously allocated.
Through this optional implementation, the recorded requests and first-allocated IP addresses allow the load balancing device SLB to schedule requests so that the same URL request is dispatched to different cache servers in turn by polling, relieving the load on a single cache device.
Fig. 7 is a structural block diagram (2) of the request scheduling device according to an embodiment of the present invention. As shown in Fig. 7, the scheduling module 56 includes:
1) a judging unit 72, configured to judge whether the URL request is sent for the first time;
2) a scheduling unit 74, configured to, when the judgment result is that the URL request is not sent for the first time, dispatch the URL request to a cache server other than the one at the IP address to which the URL request was previously allocated.
Through this optional implementation, when the content requested by a URL request is judged to be hot content, the same URL request is dispatched to different cache servers in turn by polling, further relieving the load on a single cache device.
Fig. 8 is a structural block diagram (3) of the request scheduling device according to an embodiment of the present invention. As shown in Fig. 8, in addition to the modules in Fig. 5, the device further includes:
1) a processing module 82, configured to, when the content requested by the URL request is judged to be hot content, issue content replication information to the cache servers that need to perform hot-content replication, so that a cache server receiving the content replication information replicates the hot content;
wherein the content replication information includes at least one of: the URL of the hot content, and the Internet Protocol (IP) address of the cache server that has cached the hot content.
Through this optional implementation, content replication information is issued to the cache servers that need to replicate the hot content, so that after the URL request is dispatched to another cache server the corresponding resource is available there, improving the user experience.
Fig. 9 is a structural block diagram (4) of the request scheduling device according to an embodiment of the present invention. As shown in Fig. 9, the device further includes:
1) a computing module 92, configured to, when the judgment result is that the content requested by the URL request is not hot content, perform a hash calculation on the unique identifier (ID) of the requested content to determine the cache server that will serve the URL request.
Embodiment 3
An embodiment of the invention further provides a storage medium. Optionally, in this embodiment, the storage medium may be configured to store program code for performing the following steps:
S1: receive a Uniform Resource Locator (URL) request sent by a terminal;
S2: judge whether the content requested by the URL request is hot content;
S3: when the judgment result is that the requested content is hot content, dispatch the URL request to a second cache server other than the first cache server to which the URL request was previously allocated.
Optionally, in this embodiment, the storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
Alternatively, in the present embodiment, processor performs according to the program code stored in storage medium
Above-mentioned steps S1, S2 and S3.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments and optional implementations, which are not repeated here.
Obviously, those skilled in the art should understand that each of the above modules or steps of the present invention may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, and thus may be stored in a storage device and executed by the computing device. In some cases, the steps shown or described may be performed in an order different from that described herein; alternatively, they may be fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The foregoing descriptions are merely preferred embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (7)
- 1. A cache device content copying method, characterized by comprising: determining a hot URL request according to a URL request count; and copying the content of the hot URL request from the first cache device to which the hot URL was first directed to a second cache device.
- 2. The method according to claim 1, characterized in that determining the hot URL request according to the URL request count comprises: recording each URL request within a detection cycle; counting, when the detection cycle ends, the number of requests of each URL within the detection cycle; and sorting by request count to determine the hot URL request.
- 3. The method according to claim 1, characterized in that copying the content of the hot URL request from the first cache device to which the hot URL was first directed to the second cache device comprises: recording the IP address of the first cache device selected when the hot URL was first requested; encapsulating the hot URL and the IP address of the first cache device to which the hot URL was first directed into a message body; pushing the message body down to the second cache device that needs to replicate the content of the hot URL request; parsing, by the second cache device that needs to replicate the content of the hot URL request, the message body; and pulling, by the second cache device that needs to replicate the content of the hot URL request, the content of the hot URL request from the first cache device to which the hot URL was first directed, and caching the content of the hot URL request locally.
- 4. The method according to claim 3, characterized in that the hot URL and the IP address of the first cache device to which the hot URL was first directed are encapsulated into the message body in JSON format.
- 5. The method according to any one of claims 1-4, characterized in that, after a hot URL request is received, the hot URL request is dispatched to the second cache device other than the first cache device to which the hot URL request was previously allocated.
- 6. The method according to any one of claims 1-5, characterized in that there are one or more second cache devices.
- 7. A load balancing system, comprising a load balancing device and cache devices, characterized in that the cache device is a device implementing the method according to any one of claims 1 to 6.
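The detection-cycle logic of claim 2 can be sketched as follows (the cycle boundary handling, the top-N cutoff, and the class name are assumptions, not part of the claims):

```python
from collections import Counter

class HotUrlDetector:
    """Record every URL request during a detection cycle; when the cycle
    ends, sort by request count and report the hottest URLs (claim 2)."""

    def __init__(self, top_n: int = 3):
        self.top_n = top_n
        self.counts = Counter()

    def record(self, url: str) -> None:
        self.counts[url] += 1          # record each URL request in the cycle

    def end_cycle(self) -> list:
        """Count and sort the cycle's requests, return the hot URLs,
        and reset the counters for the next detection cycle."""
        hot = [url for url, _ in self.counts.most_common(self.top_n)]
        self.counts.clear()
        return hot
```

A scheduler could feed every incoming request into `record` and, at each cycle boundary, use the list returned by `end_cycle` to trigger the copy-information push of claim 3.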
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610511615.XA CN107517243A (en) | 2016-06-16 | 2016-06-16 | Request scheduling method and device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610511615.XA CN107517243A (en) | 2016-06-16 | 2016-06-16 | Request scheduling method and device |
CN201610439369.1A CN107517241A (en) | 2016-06-16 | 2016-06-16 | Request scheduling method and device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610439369.1A Division CN107517241A (en) | 2016-06-16 | 2016-06-16 | Request scheduling method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107517243A true CN107517243A (en) | 2017-12-26 |
Family
ID=60721398
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610439369.1A Pending CN107517241A (en) | 2016-06-16 | 2016-06-16 | Request scheduling method and device |
CN201610511615.XA Withdrawn CN107517243A (en) | 2016-06-16 | 2016-06-16 | Request scheduling method and device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610439369.1A Pending CN107517241A (en) | 2016-06-16 | 2016-06-16 | Request scheduling method and device |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN107517241A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111385327A (en) * | 2018-12-28 | 2020-07-07 | 阿里巴巴集团控股有限公司 | Data processing method and system |
CN113472901A (en) * | 2021-09-02 | 2021-10-01 | 深圳市信润富联数字科技有限公司 | Load balancing method, device, equipment, storage medium and program product |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107682281B (en) * | 2017-09-15 | 2020-04-17 | 通鼎互联信息股份有限公司 | SDN switch and application management method thereof |
CN111131402B (en) * | 2018-03-22 | 2022-06-03 | 贵州白山云科技股份有限公司 | Method, device, equipment and medium for configuring shared cache server group |
CN109151512A (en) * | 2018-09-12 | 2019-01-04 | 中国联合网络通信集团有限公司 | The method and device of content is obtained in CDN network |
CN109819039B (en) * | 2019-01-31 | 2022-04-19 | 网宿科技股份有限公司 | File acquisition method, file storage method, server and storage medium |
CN112019451B (en) * | 2019-05-29 | 2023-11-21 | 中国移动通信集团安徽有限公司 | Bandwidth allocation method, debugging network element, local cache server and computing device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101668046A (en) * | 2009-10-13 | 2010-03-10 | 成都市华为赛门铁克科技有限公司 | Resource caching method, resource obtaining method, device and system thereof |
CN103281367A (en) * | 2013-05-22 | 2013-09-04 | 北京蓝汛通信技术有限责任公司 | Load balance method and device |
US20140115120A1 (en) * | 2011-12-14 | 2014-04-24 | Huawei Technologies Co., Ltd. | Content Delivery Network CDN Routing Method, Device, and System |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104202362B (en) * | 2014-08-14 | 2017-11-03 | 上海帝联信息科技股份有限公司 | SiteServer LBS and its content distribution method and device, load equalizer |
2016
- 2016-06-16 CN CN201610439369.1A patent/CN107517241A/en active Pending
- 2016-06-16 CN CN201610511615.XA patent/CN107517243A/en not_active Withdrawn
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101668046A (en) * | 2009-10-13 | 2010-03-10 | 成都市华为赛门铁克科技有限公司 | Resource caching method, resource obtaining method, device and system thereof |
US20140115120A1 (en) * | 2011-12-14 | 2014-04-24 | Huawei Technologies Co., Ltd. | Content Delivery Network CDN Routing Method, Device, and System |
CN103281367A (en) * | 2013-05-22 | 2013-09-04 | 北京蓝汛通信技术有限责任公司 | Load balance method and device |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111385327A (en) * | 2018-12-28 | 2020-07-07 | 阿里巴巴集团控股有限公司 | Data processing method and system |
CN111385327B (en) * | 2018-12-28 | 2022-06-14 | 阿里巴巴集团控股有限公司 | Data processing method and system |
CN113472901A (en) * | 2021-09-02 | 2021-10-01 | 深圳市信润富联数字科技有限公司 | Load balancing method, device, equipment, storage medium and program product |
Also Published As
Publication number | Publication date |
---|---|
CN107517241A (en) | 2017-12-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107517243A (en) | Request scheduling method and device | |
CN109951880B (en) | Communication processing method and device, computer readable medium and electronic equipment | |
CN103457993B (en) | Local cache device and the method that content caching service is provided | |
CN110430274A (en) | A kind of document down loading method and system based on cloud storage | |
CN106657379A (en) | Implementation method and system for NGINX server load balancing | |
CN104486402B (en) | A kind of method based on large-scale website combined equalization | |
US20080301219A1 (en) | System and/or Method for Client-Driven Server Load Distribution | |
CN102739717B (en) | Method for down loading, download agent server and network system | |
CN104852934A (en) | Method for realizing flow distribution based on front-end scheduling, device and system thereof | |
CN102196060A (en) | Method and system for selecting source station by Cache server | |
TW201822013A (en) | Server load balancing method, apparatus, and server device | |
JP6485980B2 (en) | Network address resolution | |
CN101815033A (en) | Method, device and system for load balancing | |
CN107835437B (en) | Dispatching method based on more cache servers and device | |
CN109660578B (en) | CDN back-to-source processing method, device and system | |
CN105847853A (en) | Video content distribution method and device | |
CN103237031B (en) | Time source side method and device in order in content distributing network | |
CN101997822A (en) | Streaming media content delivery method, system and equipment | |
CN106789956B (en) | A kind of P2P order method and system based on HLS | |
CN106161573A (en) | Server buffer processing method, Apparatus and system | |
US20110131288A1 (en) | Load-Balancing In Replication Engine of Directory Server | |
CN106657183A (en) | Caching acceleration method and apparatus | |
CN109962961A (en) | A kind of reorientation method and system of content distribution network CDN service node | |
CN106304154B (en) | A kind of data transmission method and PDCP entity of PDCP entity | |
CN106790697A (en) | Safe Realization of Storing and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20171226 |