
CN107689969A - Method and device for determining a cache policy - Google Patents

Method and device for determining a cache policy

Info

Publication number
CN107689969A
Authority
CN
China
Prior art keywords
api
request message
cache
identifier
time period
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610628394.4A
Other languages
Chinese (zh)
Other versions
CN107689969B (en)
Inventor
梁标
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201610628394.4A
Priority to PCT/CN2017/075295 (WO2018023966A1)
Publication of CN107689969A
Application granted
Publication of CN107689969B
Legal status: Active
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40: Support for services or applications
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services
    • H04L67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50: Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5003: Managing SLA; Interaction between SLA and QoS
    • H04L41/5009: Determining service level performance parameters or violations of service level contracts, e.g. violations of agreed response time or mean time between failures [MTBF]
    • H04L41/5012: Determining service level performance parameters or violations of service level contracts, determining service availability, e.g. which services are available at a certain point in time
    • H04L41/5016: Determining service level performance parameters or violations of service level contracts, determining service availability based on statistics of service availability, e.g. in percentage or over a given time
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/51: Discovery or management thereof, e.g. service location protocol [SLP] or web services

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Debugging And Monitoring (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Embodiments of the present invention provide a method and device for determining a cache policy, relating to the field of communications technology, so that an API management cluster can generate a cache policy automatically. The method includes: the API management cluster performs statistical analysis on multiple received API request messages and multiple API response messages, and determines a cache time period according to the statistical analysis result; the API management cluster then determines a cache policy. The cache time period is the time period, in a cycle after the cycle containing a first time period, that corresponds to the first time period; within the first time period, the API response messages corresponding to all API request messages carrying a first API identifier received by the API management cluster have identical content.

Description

Method and device for determining a cache policy
Technical field
The present invention relates to the field of communications technology, and in particular to a method and device for determining a cache policy.
Background
With the rapid development of applications (Application, App), more and more companies and website service providers offer open Application Programming Interfaces (Application Programming Interface, API) to App developers, so that App developers can rapidly develop Apps based on the open APIs.
When an App is in use, it calls the corresponding API by sending an API request message to an API management cluster, and the API management cluster forwards the API request message, after processing, to the corresponding server. With the rapid growth of user demand, the API management cluster, acting as an API gateway, needs to handle billions of API request messages every day, on average tens of thousands of API request messages per second, which severely tests the performance of the API management cluster.
At present, the API management cluster provides a caching function to API developers. When developing an API orchestration, the API developer adds a cache policy to the orchestration, instructing the API management cluster to cache, within a specified time period, the content of the API response corresponding to the first received API request message carrying a specified API identifier. When an API request message carrying the same API identifier is received within the specified time period, the cluster responds directly with the cached content. This spares the API management cluster from processing every received API request message carrying the same API identifier and forwarding it to the server, thereby shortening API processing delay and reducing the load on the server.
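The patent contains no code; as a minimal Python sketch of the manually-written caching behaviour described above, with all class, function and variable names being our own illustrative assumptions:

```python
from datetime import datetime, time

# Illustrative gateway cache: inside a policy's time window, the response to
# the first request carrying a given API identifier is cached, and later
# requests with the same identifier are answered from the cache.
class GatewayCache:
    def __init__(self, policies):
        # policies: api_id -> (window_start, window_end) as datetime.time
        self.policies = policies
        self.cache = {}  # api_id -> cached response content

    def in_window(self, api_id, now):
        window = self.policies.get(api_id)
        return window is not None and window[0] <= now.time() < window[1]

    def handle(self, api_id, now, forward_to_server):
        """Return a response, using the cache inside the policy window."""
        if self.in_window(api_id, now):
            if api_id not in self.cache:
                self.cache[api_id] = forward_to_server(api_id)
            return self.cache[api_id]
        return forward_to_server(api_id)  # outside the window: no caching

calls = []
def server(api_id):
    calls.append(api_id)
    return f"response-for-{api_id}"

gw = GatewayCache({"api-1": (time(9, 0), time(10, 0))})
t = datetime(2016, 8, 1, 9, 30)
gw.handle("api-1", t, server)
gw.handle("api-1", t, server)  # served from cache, server not called again
```

A wrongly written window or identifier in such a policy is exactly what the next paragraph identifies as the risk of manual authoring.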
However, in the above method, cache policies are all written manually by API developers. If a cache policy is written incorrectly, the App may directly receive a wrong response, affecting the service.
Summary of the invention
Embodiments of the invention provide a method and device for determining a cache policy, so that an API management cluster can generate a cache policy automatically, improving the accuracy of the cache policy.
To achieve the above purpose, the embodiments of the invention adopt the following technical solutions:
In a first aspect, an embodiment of the present invention provides a method for determining a cache policy. The method includes: an API management cluster performs statistical analysis on multiple received API request messages and multiple API response messages, the API request messages corresponding one-to-one with the API response messages; the API management cluster determines a cache time period according to the statistical analysis result, the cache time period being the time period, in a cycle after the cycle containing a first time period, that corresponds to the first time period. Within the first time period, the API response messages corresponding to all API request messages carrying a first API identifier received by the API management cluster have identical content. The first time period runs from the moment within it at which the API management cluster receives the first API request message carrying the first API identifier, to the moment within it at which the API management cluster receives the API response message corresponding to the last API request message carrying the first API identifier. The API management cluster then determines a cache policy, which instructs the API management cluster to respond to all API request messages carrying the first API identifier received within the cache time period, using the content of the API response message corresponding to the first API request message carrying the first API identifier received within the cache time period.
With the method for determining a cache policy provided by the embodiments of the present invention, the API management cluster can perform statistical analysis on the received API request messages and their corresponding API response messages, and, when a first time period exists within which the API response messages corresponding to all received API request messages carrying the first API identifier have identical content, determine the cache time period and thereby the cache policy. Cache policy determination is thus automated, without manual writing, which improves the accuracy of the cache policy.
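The core of this determination is finding the first time period from observed traffic. A sketch in Python, under the assumption that the analysis log can be reduced to (receive time, API identifier, response content) tuples; all names are illustrative:

```python
# Find the first time period for one API identifier: the span from the first
# to the last message of the longest run of requests whose response content
# is identical. Times are plain integers here for simplicity.
def first_time_period(log, api_id):
    entries = [(t, content) for t, aid, content in log if aid == api_id]
    best = None
    i = 0
    while i < len(entries):
        j = i
        # Extend the run while response content stays identical.
        while j + 1 < len(entries) and entries[j + 1][1] == entries[i][1]:
            j += 1
        if j > i and (best is None or
                      entries[j][0] - entries[i][0] > best[1] - best[0]):
            best = (entries[i][0], entries[j][0])
        i = j + 1
    return best  # (start, end), or None if content never repeats

log = [(900, "api-1", "A"), (905, "api-1", "A"), (930, "api-1", "A"),
       (931, "api-1", "B"), (940, "api-2", "X")]
assert first_time_period(log, "api-1") == (900, 930)
```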
Optionally, the API management cluster verifies the accuracy of the cache policy over the cache time period within at least one cycle after the cycle containing the first time period, the at least one cycle being adjacent to the cycle containing the first time period. If the accuracy is greater than or equal to a preset threshold, the API management cluster uses the cache policy to respond to API request messages carrying the first API identifier.
In this optional manner, the API management cluster can automatically verify the determined cache policy, further ensuring its accuracy, and automatically uses only a cache policy whose accuracy has reached the preset threshold to respond to API request messages, thereby ensuring that the API management cluster responds to API request messages correctly.
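The patent does not specify how accuracy is computed; one plausible sketch is a hit ratio of actual responses against the would-be cached content, with the threshold value and all names being illustrative assumptions:

```python
# Accuracy check over the cache window of adjacent later cycles: the policy
# is adopted only if the fraction of responses matching the cached content
# reaches a preset threshold (0.99 here, purely illustrative).
def policy_accuracy(cached_content, observed_responses):
    if not observed_responses:
        return 0.0
    hits = sum(1 for r in observed_responses if r == cached_content)
    return hits / len(observed_responses)

def adopt_policy(cached_content, observed_responses, threshold=0.99):
    return policy_accuracy(cached_content, observed_responses) >= threshold

observed = ["A"] * 99 + ["B"]          # one mismatch among 100 responses
assert adopt_policy("A", observed)      # 0.99 >= 0.99: adopted
assert not adopt_policy("A", observed, threshold=0.995)
```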
Optionally, after the API management cluster uses the cache policy to respond to API request messages carrying the first API identifier when the accuracy is greater than or equal to the preset threshold, the method further includes: the API management cluster periodically verifies, with a first period, whether the cache policy is still valid within the cache time period; when the cache policy is invalid, the API management cluster updates it.
In this optional manner, the API management cluster can automatically verify a cache policy in use and, when the verification fails, update the policy in time, realizing an automatic verification process for the cache policy in use and ensuring its correctness.
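A minimal sketch of this periodic re-validation, assuming a policy record holding the cached content and a simple refresh-on-mismatch update strategy (both assumptions, not taken from the patent):

```python
# Each cycle, check whether responses observed in the cache window still
# match the cached content; if not, refresh the cached content.
def revalidate(policy, responses_this_cycle):
    """Return True if the policy is still valid; otherwise update it."""
    if all(r == policy["content"] for r in responses_this_cycle):
        return True
    policy["content"] = responses_this_cycle[0]  # simple refresh strategy
    return False

policy = {"api_id": "api-1", "content": "A"}
assert revalidate(policy, ["A", "A", "A"])      # still valid
assert not revalidate(policy, ["B", "B"])       # invalid: content refreshed
assert policy["content"] == "B"
```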
Optionally, before the API management cluster performs statistical analysis on the multiple received API request messages and multiple API response messages, the method further includes: the API management cluster determines whether the Quality of Service (QoS) level information carried in the multiple API request messages meets a preset level. The statistical analysis then comprises: if the QoS level information carried in the multiple API request messages meets the preset level, the API management cluster performs statistical analysis on the multiple received API request messages and the multiple API response messages.
The level information may include the QoS level of the App, or the QoS level of the App's user, and the preset level may be, for example, a gold App level. In this optional manner, prioritized caching of API request messages by level can be realized.
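A sketch of this QoS gate, where the rank table, the "gold" default and the field names are all illustrative assumptions:

```python
# Only request messages at or above the preset QoS level are fed into the
# statistical analysis; everything else is skipped.
QOS_RANK = {"bronze": 0, "silver": 1, "gold": 2}

def filter_for_analysis(request_messages, preset_level="gold"):
    return [m for m in request_messages
            if QOS_RANK[m["qos"]] >= QOS_RANK[preset_level]]

msgs = [{"api_id": "api-1", "qos": "gold"},
        {"api_id": "api-1", "qos": "silver"},
        {"api_id": "api-2", "qos": "gold"}]
assert len(filter_for_analysis(msgs)) == 2
```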
In a second aspect, an embodiment of the present invention provides an API management cluster, including: a statistical analysis unit, configured to perform statistical analysis on multiple received API request messages and multiple API response messages, the API request messages corresponding one-to-one with the API response messages; and a determining unit, configured to determine a cache time period according to the statistical analysis result obtained by the statistical analysis unit, the cache time period being the time period, in a cycle after the cycle containing a first time period, that corresponds to the first time period. Within the first time period, the API response messages corresponding to all received API request messages carrying a first API identifier have identical content, and the first time period runs from the moment within it at which the first API request message carrying the first API identifier is received, to the moment within it at which the API response message corresponding to the last API request message carrying the first API identifier is received. The determining unit is further configured to determine a cache policy, which instructs the API management cluster to respond to all API request messages carrying the first API identifier received within the cache time period, using the content of the API response message corresponding to the first API request message carrying the first API identifier received within the cache time period.
Optionally, the API management cluster further includes a verification unit and a response unit. The verification unit is configured to verify the accuracy of the cache policy over the cache time period within at least one cycle after the cycle containing the first time period, the at least one cycle being adjacent to the cycle containing the first time period. The response unit is configured to use the cache policy to respond to API request messages carrying the first API identifier when the accuracy is greater than or equal to a preset threshold.
Optionally, the API management cluster further includes an updating unit. The verification unit is further configured to periodically verify, with a first period, whether the cache policy is still valid within the cache time period after the response unit starts using the cache policy to respond to API request messages carrying the first API identifier. The updating unit is configured to update the cache policy when the verification unit determines that the cache policy is invalid.
Optionally, the determining unit is further configured to determine, before the statistical analysis unit performs statistical analysis on the multiple received API request messages and multiple API response messages, whether the QoS level information carried in the multiple API request messages meets a preset level. The statistical analysis unit is specifically configured to perform statistical analysis on the multiple received API request messages and the multiple API response messages if the determining unit determines that the QoS level information carried in the multiple API request messages meets the preset level.
For the technical effects of the API management cluster provided by the embodiments of the present invention, refer to the technical effects of the first aspect or of each optional manner of the first aspect, which are not repeated here.
Optionally, in the first and second aspects above, within the first time period, the API response messages corresponding to all received API request messages carrying the first API identifier and a first parameter have identical content, and the first time period runs from the moment within it at which the first API request message carrying the first API identifier and the first parameter is received, to the moment within it at which the API response message corresponding to the last API request message carrying the first API identifier and the first parameter is received. The cache policy instructs the API management cluster to respond to all API request messages carrying the first API identifier and the first parameter received within the cache time period, using the content of the API response message corresponding to the first API request message carrying the first API identifier and the first parameter received within the cache time period.
In this optional manner, when the content of the API response message is related to a first parameter in the API request message, the cache policy can still be determined automatically, without manual writing, improving the accuracy of the cache policy.
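When response content depends on a first parameter, the natural consequence is that the cache key combines the API identifier with that parameter. A sketch, with all names and the key layout being illustrative assumptions:

```python
# Parameter-aware cache: requests with the same API identifier but a
# different first parameter get distinct cache entries.
cache = {}

def respond(api_id, first_param, fetch):
    key = (api_id, first_param)   # identifier plus parameter as cache key
    if key not in cache:
        cache[key] = fetch(api_id, first_param)
    return cache[key]

fetched = []
def fetch(api_id, p):
    fetched.append((api_id, p))
    return f"{api_id}:{p}"

assert respond("api-1", "city=SZ", fetch) == "api-1:city=SZ"
assert respond("api-1", "city=SZ", fetch) == "api-1:city=SZ"  # cache hit
assert respond("api-1", "city=BJ", fetch) == "api-1:city=BJ"  # distinct key
assert fetched == [("api-1", "city=SZ"), ("api-1", "city=BJ")]
```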
In a third aspect, an embodiment of the present invention provides an API management cluster, including a processor, a memory, a system bus and a communication interface.
The memory is configured to store computer-executable instructions, and the processor is connected to the memory through the system bus. When the API management cluster runs, the processor executes the computer-executable instructions stored in the memory, so that the API management cluster performs the method for determining a cache policy described in the first aspect or any optional manner of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, including computer-executable instructions. When a processor of an API management cluster executes the computer-executable instructions, the API management cluster performs the method for determining a cache policy described in the first aspect or any optional manner of the first aspect.
For the technical effects of the API management clusters described in the third and fourth aspects, refer to the technical effects of the first aspect or of each optional manner of the first aspect, which are not repeated here.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the embodiments are briefly introduced below. Apparently, the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a schematic diagram of an API application network scenario according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of an API management cluster according to an embodiment of the present invention;
Fig. 3 is a flowchart of a method for determining a cache policy according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an API management cluster according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another API management cluster according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of yet another API management cluster according to an embodiment of the present invention.
Detailed description of embodiments
The technical solutions in the embodiments of the present invention are described clearly below with reference to the accompanying drawings in the embodiments. Apparently, the described embodiments are only some, rather than all, of the embodiments of the present invention.
Fig. 1 is a schematic diagram of an API application network scenario according to an embodiment of the present invention, which may include a website server, a point of sale (Point of Sale, POS) terminal, a mobile terminal, an API management cluster, a background server (backend server) and an integrated system. The integrated system may include an Enterprise Service Bus (ESB), a Service-Oriented Architecture (SOA), a database and an App server (App Server, AS), among others. The API management cluster serves as an API gateway, connected through the Internet with the user terminals (including the website server, the POS terminal and the mobile terminal) and with the service side (including the background server and the integrated system). The API management cluster forwards API request messages sent by a user terminal to the service side, for example to an AS; the AS processes and responds to the API request message and sends the corresponding API response message to the API management cluster, which forwards the API response message to the user terminal.
Fig. 2 shows an API management cluster according to an embodiment of the present invention, including a management server (management server), a router, a message processing (message process) module and a cache analysis module.
The management server is responsible for creating APIs, creating Apps, subscribing to APIs, defining API orchestrations, and so on.
The router is responsible for receiving API request messages from user terminals and handing them to the message processing module, and for receiving the responses from the message processing module and returning them to the user terminals.
The message processing module processes API request messages according to the API orchestration (for example, format conversion and caching) and forwards them to the specified background server or AS, and processes API response messages according to the API orchestration (for example, format conversion and caching) and returns them to the router.
The cache analysis module is a module added to the API management cluster. It performs statistical analysis on the API request messages received by the message processing module or router and their corresponding API response messages to determine a cache policy, automatically verifies the determined cache policy, and saves the verified cache policy into the API orchestration for the message processing module to use. This realizes an automatic flow for determining API cache policies, without manual writing and maintenance, thereby improving the accuracy of cache policies.
It should be noted that, in the embodiments of the present invention, the management server, the router, the message processing module and the cache analysis module in the API management cluster may each be an independent physical machine, or may be independent processes or threads running on the same physical machine.
Specifically, based on Fig. 2 and as shown in Fig. 3, an embodiment of the present invention provides a method for determining a cache policy, which may include:
S101: The API management cluster performs statistical analysis on multiple received API request messages and multiple API response messages.
The multiple API request messages correspond one-to-one with the multiple API response messages, and each API request message and its corresponding API response message carry the same API identifier.
In the embodiments of the present invention, the API management cluster may input the received API request messages and corresponding API response messages into the cache analysis module and count them by API identifier, to determine whether the content of the API response messages carrying each API identifier can be cached, and to obtain a statistical analysis result in the cases where it can.
In one example, the API management cluster may analyze whether a time period exists within which the API response messages corresponding to all received API request messages carrying the same API identifier have identical content, to determine whether the content of the API response messages carrying that API identifier can be cached.
Taking the first API identifier as an example, after the API management cluster performs statistical analysis on the received API request messages and corresponding API response messages, if within a first time period the API response messages corresponding to all API request messages carrying the first API identifier have identical content, then the content of the API response messages carrying the first API identifier can be cached, and the API management cluster can record the first time period within which that content is identical, to obtain the statistical analysis result.
The first time period runs from the moment within it at which the API management cluster receives the first API request message carrying the first API identifier, to the moment within it at which the API management cluster receives the API response message corresponding to the last API request message carrying the first API identifier.
For example, suppose that on xxxx-xx-01 the API management cluster received 10000 API request messages carrying the first API identifier, together with the API response message corresponding to each. If the API management cluster determines by statistical analysis that the API response messages corresponding to the 100th through the 1000th API request messages carrying the first API identifier all have identical content, the API management cluster can determine that the content of the API response messages corresponding to API request messages carrying the first API identifier can be cached.
The API management cluster can then determine the first time period from the moment at which it received the 100th API request message carrying the first API identifier and the moment at which it received the API response message corresponding to the 1000th such API request message. For example, if the API management cluster received the 100th API request message carrying the first API identifier at 9:00 on xxxx-xx-01 and received the API response message corresponding to the 1000th such API request message at 10:00 on xxxx-xx-01, the first time period is 9:00 to 10:00 on xxxx-xx-01. The statistical analysis result obtained by the API management cluster is thus: from 9:00 to 10:00 on xxxx-xx-01, the API response messages corresponding to all API request messages carrying the first API identifier have identical content.
It should be noted that, in the above example, within the first time period the parameters carried in the received API request messages carrying the first API identifier may be identical, may be entirely different, or may share some parameters while differing in others.
In one example, the API management cluster may analyze whether a time period exists within which the API response messages corresponding to all received API request messages carrying the same API identifier and the same partial parameters have identical content, to determine whether the content of the API response messages carrying that API identifier and those parameters can be cached. Here, an API request message and its corresponding API response message carry the same API identifier and partial parameters.
Taking the first API identifier and a first parameter as an example, if within a first time period the API response messages corresponding to all received API request messages carrying the first API identifier and the first parameter have identical content, then the content of the API response messages carrying the first API identifier and the first parameter can be cached, and the API management cluster can record the first time period within which that content is identical, to obtain the statistical analysis result.
In the embodiments of the present invention, the first time period runs from the moment within it at which the API management cluster receives the first API request message carrying the first API identifier and the first parameter, to the moment within it at which the API management cluster receives the API response message corresponding to the last API request message carrying the first API identifier and the first parameter.
The first parameter in an API request message may be a parameter (parameters) in the Uniform Resource Locator (URL) of the API request message; correspondingly, the first parameter in the API response message is a parameter in the URL of the API response message. The first parameter in an API request message may also be a parameter in the body of the API request message; correspondingly, the first parameter in the API response message may also be a parameter in the body of the API response message.
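The two locations for a first parameter named above can be sketched in Python; the function name, field names and the assumption of a JSON body are our own, not from the patent:

```python
import json
from urllib.parse import urlparse, parse_qs

# Look for the named parameter first in the URL query string, then in the
# message body (assumed JSON for this sketch).
def extract_first_param(url, body, name):
    query = parse_qs(urlparse(url).query)
    if name in query:
        return query[name][0]
    return json.loads(body).get(name) if body else None

assert extract_first_param("http://example.com/api?city=SZ", None, "city") == "SZ"
assert extract_first_param("http://example.com/api", '{"city": "BJ"}', "city") == "BJ"
```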
In one example, when multiple time periods exist within one cycle, and within each of them the API response messages corresponding to all API request messages carrying the first API identifier received by the API management cluster have identical content, the multiple time periods may each be treated as a separate first time period, or may together form one first time period.
For example, if within each of the three time periods 9:00-10:00, 11:00-12:00 and 15:00-16:00 on xxxx-xx-01 the API response messages corresponding to all API request messages carrying the first API identifier received by the API management cluster have identical content, or the API response messages corresponding to all API request messages carrying the first API identifier and the first parameter have identical content, then the set of these three time periods may be determined as one first time period, or each of 9:00-10:00, 11:00-12:00 and 15:00-16:00 may be determined as a separate first time period.
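Treating the three spans from this example as one first time period amounts to a membership test over a set of spans; a small sketch using hour-only times (representation and names are illustrative):

```python
# One first time period made of several spans: caching applies whenever the
# current hour falls inside any of the qualifying spans.
period_set = {(9, 10), (11, 12), (15, 16)}

def in_cache_window(hour, spans):
    return any(start <= hour < end for start, end in spans)

assert in_cache_window(9, period_set)
assert in_cache_window(15, period_set)
assert not in_cache_window(10, period_set)  # between spans: no caching
```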
In one example, before the API management cluster performs statistical analysis on the received API request messages carrying the first API identifier and the API response message corresponding to each, the API management cluster may also check the QoS level information carried in the received API request messages, to determine whether the received QoS level information meets a preset level.
For example, the level information carried in an API request message may include the QoS level of the App or the QoS level of the App's user; correspondingly, the preset level may be a gold App level, a gold user level, and so on.
For example, if the QoS level of the App user carried in an API request message received by the API management cluster is the gold user level, the API request message and its corresponding API response message can be input into the cache analysis module for statistical analysis.
That is, in the embodiments of the present invention, if the level information carried in the multiple API request messages received by the API management cluster meets the preset level, the API management cluster performs statistical analysis on the multiple API request messages and multiple API response messages, so that caching analysis is performed preferentially for the API requests of high-level Apps or users.
S102: The API management cluster determines a cache time period according to the statistical analysis result.
Wherein, it is corresponding with the first time period in the cycle after cache-time section cycle where first time period Period.
For example, it is assumed that 1 day is 1 cycle, first time period is on the xxxx xx months 01 9:00~10:00, then 9 in every day after the xxxx xx months 01:00~10:00 is the period corresponding with first time period.At this In inventive embodiments, API manages 9 in every day after cluster can determine the xxxx xx months 01 according to the actual requirements: 00~10:00 is cache-time section, can also determine 9 in every day in one month after the xxxx xx months 01:00 ~10:00 is cache-time section, on the other hand, the embodiment of the present invention is not restricted.
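The projection of the first time period onto later cycles can be sketched as follows; the one-day cycle and the concrete dates are assumptions for illustration only.

```python
from datetime import date, datetime, time, timedelta

def cache_windows(first_day, start, end, n_cycles):
    """Project a first time period (e.g. 9:00-10:00 on first_day) onto the
    same clock interval in each of the next n_cycles days (cycle = 1 day)."""
    return [(datetime.combine(first_day + timedelta(days=i), start),
             datetime.combine(first_day + timedelta(days=i), end))
            for i in range(1, n_cycles + 1)]

windows = cache_windows(date(2016, 8, 1), time(9, 0), time(10, 0), 3)
# Three cache time periods: 9:00-10:00 on Aug 2, Aug 3, and Aug 4.
```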
In one example, a cycle may contain multiple periods such that, within each of those periods, the contents of the API response messages corresponding to all API request messages carrying the first API identifier received by the API management cluster are identical. If each of the multiple periods is determined as a separate first time period, the API management cluster determines one cache policy for each period. If the set of the multiple periods is determined as the first time period, that is, the first time period is a period set composed of the multiple periods, the API management cluster determines a single cache policy according to the multiple periods.
S103: The API management cluster determines a cache policy. The cache policy is used to instruct the API management cluster to respond to all API request messages carrying the first API identifier received within the cache time period, using the content of the API response message corresponding to the first API request message carrying the first API identifier received within that cache time period.
The cache policy may include the correspondence between the first API identifier and the cache time period, and an instruction used to instruct the API management cluster to perform the corresponding operation.
In one example, when the API response messages corresponding to all API request messages carrying the first API identifier and the first parameter received by the API management cluster within the first time period are identical, the cache policy includes the correspondence among the cache time period, the first parameter, and the first API identifier, and an instruction used to instruct the API management cluster to perform the corresponding operation. In this case, the cache policy is used to instruct the API management cluster to respond to all API request messages carrying the first API identifier and the first parameter received within the cache time period, using the content of the API response message corresponding to the first API request message carrying the first API identifier and the first parameter received within that cache time period.
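A cache-policy record of this shape can be sketched as a small data structure; the field names and the key layout are assumptions, not the patent's concrete encoding.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class CachePolicy:
    """Sketch of the cache-policy record described above (field names assumed):
    the first API identifier, the cache time period it applies to, and an
    optional first parameter for the claim-2 variant."""
    api_id: str
    cache_window: Tuple[str, str]          # e.g. ("09:00", "10:00") daily
    first_param: Optional[str] = None      # present only in the parameter variant

    def key(self):
        # Requests matching this key within cache_window are answered from cache.
        return (self.api_id, self.first_param)

p = CachePolicy("api-1", ("09:00", "10:00"), first_param="region=cn")
```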
It is worth noting that, with the cache policy determination method provided by the embodiments of the present invention, the cache policy is determined automatically, without manual compilation, thereby improving the accuracy of the cache policy.
Further, after S103, the method also includes:
S104: The API management cluster verifies the accuracy of the cache policy within the cache time periods of at least one cycle after the cycle in which the first time period is located.
The at least one cycle is a cycle adjacent to the cycle in which the first time period is located. The API management cluster verifies, in the cache time period of each of the at least one cycle, whether the cache policy is effective.
Specifically, within a cache time period, the API management cluster determines whether the contents of the API response messages corresponding to all API request messages carrying the first API identifier received within that cache time period are identical. If they are identical, the cache policy is effective in that period; if they differ, the cache policy is ineffective.
The API management cluster verifies the cache policy in all cache time periods within the at least one cycle, and determines the proportion of periods in which the cache policy is effective; this proportion is the accuracy of the cache policy.
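The accuracy computation just described can be sketched as follows; the representation of a period as a list of response bodies is an assumption for the example.

```python
def policy_accuracy(windows):
    """windows: list of lists; each inner list holds the response bodies of
    all first-API requests received in one cache time period. The policy is
    effective in a period iff all bodies in it are identical; accuracy is
    the fraction of effective periods."""
    effective = sum(1 for bodies in windows if len(set(bodies)) <= 1)
    return effective / len(windows)

acc = policy_accuracy([["A", "A"], ["A", "A"], ["A", "B"]])
# 2 of the 3 verified periods were consistent, so the accuracy is 2/3.
```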
It should be noted that the number of cycles in the at least one cycle can be configured according to actual requirements; this is not limited in this embodiment of the present invention.
S105: When the accuracy is greater than or equal to a preset threshold, the API management cluster uses the cache policy to respond to API request messages carrying the first API identifier.
Preferably, the preset threshold may be 100%. When the API management cluster determines that the accuracy of the cache policy is 100%, it can save the cache policy into the policy library, so as to use the cache policy to respond to API request messages carrying the first API identifier. That is, after the API management cluster saves the cache policy into the policy library in the API Orchestrator, the message processing module of the API management cluster caches, in each cache time period, the content of the API response message corresponding to the first received API request message carrying the first API identifier; when other API request messages carrying the first API identifier are received within that cache time period, it responds to them directly with the cached API response content. There is no need to forward those other API request messages carrying the first API identifier to the AS and have the AS respond to each one, thereby reducing the load on the AS.
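The cache-then-serve behaviour of the message processing module can be sketched as follows. This is a hedged illustration: `forward_to_as` is an assumed callable standing in for the real application server, and the key layout is invented for the example.

```python
class MessageProcessor:
    """Sketch of the S105 behaviour: in each cache time period, the response
    to the first matching request is cached, and later matching requests in
    the same period are answered from the cache instead of reaching the AS."""
    def __init__(self, forward_to_as):
        self.forward_to_as = forward_to_as
        self.cache = {}          # (api_id, window_id) -> cached response
        self.as_calls = 0

    def handle(self, api_id, window_id, request):
        key = (api_id, window_id)
        if key not in self.cache:
            self.as_calls += 1
            self.cache[key] = self.forward_to_as(request)  # first request: go to AS
        return self.cache[key]                             # later ones: cached copy

mp = MessageProcessor(lambda req: {"body": "hello"})
r1 = mp.handle("api-1", "2016-08-02T09:00", {})
r2 = mp.handle("api-1", "2016-08-02T09:00", {})
# Both responses are identical, but the AS was called only once.
```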
Optionally, when the accuracy is below the preset threshold, the API management cluster can delete the cache policy, redetermine a new cache policy using the method in S101-S103, and verify it again, until a cache policy whose accuracy is greater than or equal to the preset threshold is obtained.
It should be noted that the preset threshold can be configured according to actual requirements; this is not limited in this embodiment of the present invention.
Further, in this embodiment of the present invention, while the API management cluster is using the cache policy to respond to the first API request messages, it may also use a first cycle to periodically verify, within cache time periods, whether the cache policy is still effective. If the verification succeeds, it continues to use the cache policy to respond to API request messages carrying the first API identifier; if the cache policy is ineffective, the API management cluster can update it.
For example, take every 10 cache time periods as one first cycle, and periodically verify in a cache time period whether the cache policy is effective. Specifically, in the 1st to 9th cache time periods, the API management cluster responds to API request messages carrying the first API identifier using the cache policy. In the 10th cache time period, it forwards all API request messages carrying the first API identifier received in that period to the AS for normal processing, receives from the AS the API response message corresponding to each of those API request messages, and then verifies whether the contents of the response messages corresponding to all API request messages carrying the first API identifier received in the 10th cache time period are identical. If they are identical, the cache policy is effective, and the API management cluster can continue using it to respond to API request messages carrying the first API identifier. If they differ, the cache policy is ineffective, and the API management cluster can update it.
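The verification schedule in this example can be sketched as a simple modulo rule; the period of 10 follows the example above, and the mode names are assumptions.

```python
def revalidate_schedule(window_index, period=10):
    """Sketch of the periodic check described above: with a first cycle of 10
    cache time periods, windows 1-9 are served from cache and every 10th
    window is forwarded to the AS to re-verify the cache policy."""
    return "verify" if window_index % period == 0 else "serve_from_cache"

modes = [revalidate_schedule(i) for i in range(1, 21)]
# Windows 10 and 20 are verification windows; the rest are served from cache.
```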
It is understood that when cache policy checking is invalid, API management clusters can delete the cache policy, and The method in above-mentioned S101-S105 is used to redefine cache policy of the new and accuracy for predetermined threshold value.
With the solution provided by this embodiment of the present invention, the API management cluster can automatically perform statistical analysis on the received API request messages and their corresponding API response messages, determine a cache policy, and automatically verify and use the determined cache policy. This implements an automated flow for determining API cache policies, without manual writing and maintenance, thereby improving the accuracy of the cache policies.
The foregoing describes the solution provided by the embodiments of the present invention mainly from the perspective of interaction between network elements. It can be understood that, to implement the above functions, each network element, such as the API management cluster, includes corresponding hardware structures and/or software modules for performing each function. A person skilled in the art should readily appreciate that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of the present invention.
In the embodiments of the present invention, functional modules of the API management cluster and the like may be divided according to the foregoing method examples. For example, each function may be assigned to a separate functional module, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present invention is schematic and is merely a logical function division; other division manners may exist in actual implementation.
When each functional module is divided according to each function, Fig. 4 shows a possible schematic structural diagram of the API management cluster involved in the foregoing embodiments. The API management cluster includes: an analysis and statistics unit 10, a determining unit 11, a verification unit 12, a response unit 13, and an updating unit 14. The analysis and statistics unit 10 is configured to support the API management cluster in performing process S101 in Fig. 3; the determining unit 11 is configured to support the API management cluster in performing S102-S103 in Fig. 3; the verification unit 12 is configured to support the API management cluster in performing S104 in Fig. 3; the response unit 13 is configured to support the API management cluster in performing S105 in Fig. 3; and the updating unit 14 is configured to support the API management cluster in performing update operations. All related content of the steps in the foregoing method embodiments may be cited in the functional descriptions of the corresponding functional modules, and details are not repeated here.
When an integrated unit is used, and the management server, router, cache analysis module, and message processing module in the API management cluster run as independent processes or independent threads on the same physical machine, Fig. 5 shows another possible schematic structural diagram of the API management cluster involved in the foregoing embodiments. The API management cluster includes: a processing module 100 and a communication module 101. The processing module 100 is configured to control and manage the actions of the API management cluster; for example, the processing module 100 is configured to support the API management cluster in performing S101-S105 in Fig. 3, and/or other processes of the techniques described herein. The communication module 101 is configured to support communication between the API management cluster and other network entities, for example communication with the functional modules or network entities shown in Fig. 1. The API management cluster may also include a storage module 102, configured to store the program code and data of the API management cluster.
The processing module 100 may be a processor or controller, for example a CPU, a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the disclosure of the present invention. The processor may also be a combination implementing a computing function, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 101 may be a communication interface. The storage module 102 may be a memory.
When the processing module 100 is a processor, the communication module 101 is a communication interface, and the storage module 102 is a memory, the API management cluster involved in the embodiments of the present invention may be the API management cluster shown in Fig. 6.
As shown in Fig. 6, the API management cluster includes: a processor 110, a communication interface 111, a memory 112, and a bus 113. The communication interface 111, the processor 110, and the memory 112 are interconnected by the bus 113. The bus 113 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is used in Fig. 6, but this does not mean there is only one bus or only one type of bus.
When the management server, router, cache analysis module, and message processing module in the API management cluster are each independent physical machines, the API management cluster involved in the embodiments of the present invention may be the API management cluster shown in Fig. 2.
The steps of the methods or algorithms described in connection with the disclosure of the present invention may be implemented in hardware, or by a processor executing software instructions. The software instructions may consist of corresponding software modules, which may be stored in a random access memory (Random Access Memory, RAM), flash memory, read-only memory (Read Only Memory, ROM), erasable programmable read-only memory (Erasable Programmable ROM, EPROM), electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), register, hard disk, removable hard disk, CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor, so that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC. In addition, the ASIC may be located in a core network interface device. Of course, the processor and the storage medium may also exist as discrete components in the core network interface device.
A person skilled in the art will appreciate that, in one or more of the above examples, the functions described in the present invention may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium accessible to a general-purpose or special-purpose computer.
The foregoing embodiments further describe in detail the objectives, technical solutions, and beneficial effects of the present invention. It should be understood that the foregoing is merely embodiments of the present invention and is not intended to limit the protection scope of the present invention. Any modifications, equivalent replacements, improvements, and the like made on the basis of the technical solutions of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A method for determining a cache policy, wherein the method comprises:
performing, by an API management cluster, statistical analysis on multiple received API request messages and multiple API response messages, the multiple API request messages corresponding one-to-one with the multiple API response messages;
determining, by the API management cluster, a cache time period according to the statistical analysis result, the cache time period being a period, in a cycle after the cycle in which a first time period is located, that corresponds to the first time period, wherein within the first time period the contents of the API response messages corresponding to all API request messages carrying a first API identifier received by the API management cluster are identical, and the first time period extends from the time at which the API management cluster receives the first API request message carrying the first API identifier within the first time period to the time of the API response message corresponding to the last API request message carrying the first API identifier received by the API management cluster within the first time period; and
determining, by the API management cluster, a cache policy, the cache policy being used to instruct the API management cluster to respond to all API request messages carrying the first API identifier received within the cache time period, using the content of the API response message corresponding to the first API request message carrying the first API identifier received within the cache time period.
2. The method according to claim 1, wherein:
within the first time period, the contents of the API response messages corresponding to all API request messages carrying the first API identifier and a first parameter received by the API management cluster are identical, and the first time period extends from the time at which the API management cluster receives the first API request message carrying the first API identifier and the first parameter within the first time period to the time of the API response message corresponding to the last API request message carrying the first API identifier and the first parameter received by the API management cluster within the first time period; and
the cache policy is used to instruct the API management cluster to respond to all API request messages carrying the first API identifier and the first parameter received within the cache time period, using the content of the API response message corresponding to the first API request message carrying the first API identifier and the first parameter received within the cache time period.
3. The method according to claim 1 or 2, wherein the method further comprises:
verifying, by the API management cluster, the accuracy of the cache policy within the cache time periods of at least one cycle after the cycle in which the first time period is located, the at least one cycle being a cycle adjacent to the cycle in which the first time period is located; and
when the accuracy is greater than or equal to a preset threshold, responding, by the API management cluster, to API request messages carrying the first API identifier using the cache policy.
4. The method according to claim 3, wherein after the responding, by the API management cluster, to the API request messages carrying the first API identifier using the cache policy when the accuracy is greater than or equal to the preset threshold, the method further comprises:
periodically verifying, by the API management cluster using a first cycle, within the cache time period, whether the cache policy is effective; and
updating, by the API management cluster, the cache policy when the cache policy is ineffective.
5. The method according to any one of claims 1-4, wherein before the performing, by the API management cluster, statistical analysis on the multiple received API request messages and the multiple API response messages, the method further comprises:
determining, by the API management cluster, whether quality of service (QoS) class information carried in the multiple API request messages meets a preset level; and
the performing, by the API management cluster, statistical analysis on the multiple received API request messages and the multiple API response messages comprises:
if the QoS class information carried in the multiple API request messages meets the preset level, performing, by the API management cluster, statistical analysis on the multiple received API request messages and the multiple API response messages.
6. An API management cluster, comprising:
an analysis and statistics unit, configured to perform statistical analysis on multiple received API request messages and multiple API response messages, the multiple API request messages corresponding one-to-one with the multiple API response messages; and
a determining unit, configured to determine a cache time period according to the statistical analysis result of the analysis and statistics unit, the cache time period being a period, in a cycle after the cycle in which a first time period is located, that corresponds to the first time period, wherein within the first time period the contents of the API response messages corresponding to all received API request messages carrying a first API identifier are identical, and the first time period extends from the time of the first received API request message carrying the first API identifier within the first time period to the time of the API response message corresponding to the last received API request message carrying the first API identifier within the first time period;
wherein the determining unit is further configured to determine a cache policy, the cache policy being used to instruct the API management cluster to respond to all API request messages carrying the first API identifier received within the cache time period, using the content of the API response message corresponding to the first API request message carrying the first API identifier received within the cache time period.
7. The API management cluster according to claim 6, wherein:
within the first time period, the contents of the API response messages corresponding to all received API request messages carrying the first API identifier and a first parameter are identical, and the first time period extends from the time of the first received API request message carrying the first API identifier and the first parameter within the first time period to the time of the API response message corresponding to the last received API request message carrying the first API identifier and the first parameter within the first time period; and
the cache policy is used to instruct the API management cluster to respond to all API request messages carrying the first API identifier and the first parameter received within the cache time period, using the content of the API response message corresponding to the first API request message carrying the first API identifier and the first parameter received within the cache time period.
8. The API management cluster according to claim 6 or 7, further comprising a verification unit and a response unit, wherein:
the verification unit is configured to verify the accuracy of the cache policy within the cache time periods of at least one cycle after the cycle in which the first time period is located, the at least one cycle being a cycle adjacent to the cycle in which the first time period is located; and
the response unit is configured to respond to API request messages carrying the first API identifier using the cache policy when the accuracy is greater than or equal to a preset threshold.
9. The API management cluster according to claim 8, further comprising an updating unit, wherein:
the verification unit is further configured to, after the response unit responds to API request messages carrying the first API identifier using the cache policy, periodically verify, using a first cycle, within the cache time period, whether the cache policy is effective; and
the updating unit is further configured to update the cache policy when the verification unit determines that the cache policy is ineffective.
10. The API management cluster according to any one of claims 6-9, wherein:
the determining unit is further configured to determine, before the analysis and statistics unit performs statistical analysis on the multiple received API request messages and the multiple API response messages, whether quality of service (QoS) class information carried in the multiple API request messages meets a preset level; and
the analysis and statistics unit is specifically configured to perform statistical analysis on the multiple received API request messages and the multiple API response messages if the determining unit determines that the QoS class information carried in the multiple API request messages meets the preset level.
CN201610628394.4A 2016-08-03 2016-08-03 Method and device for determining cache strategy Active CN107689969B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610628394.4A CN107689969B (en) 2016-08-03 2016-08-03 Method and device for determining cache strategy
PCT/CN2017/075295 WO2018023966A1 (en) 2016-08-03 2017-03-01 Method and device for determining caching strategy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610628394.4A CN107689969B (en) 2016-08-03 2016-08-03 Method and device for determining cache strategy

Publications (2)

Publication Number Publication Date
CN107689969A true CN107689969A (en) 2018-02-13
CN107689969B CN107689969B (en) 2020-01-17

Family

ID=61072446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610628394.4A Active CN107689969B (en) 2016-08-03 2016-08-03 Method and device for determining cache strategy

Country Status (2)

Country Link
CN (1) CN107689969B (en)
WO (1) WO2018023966A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114785637A (en) * 2022-03-15 2022-07-22 Inspur Cloud Information Technology Co., Ltd. Implementation method and system for caching response data by API gateway

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113138943B (en) * 2020-01-19 2023-11-03 Beijing Jingdong Zhenshi Information Technology Co., Ltd. Method and device for processing request
CN112069386B (en) * 2020-09-07 2023-09-05 Beijing QIYI Century Science & Technology Co., Ltd. Request processing method, device, system, terminal and server

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101567840A (en) * 2008-04-22 2009-10-28 Shenzhen Coship Electronics Co., Ltd. Streaming media data cache control method and device
CN102426541A (en) * 2010-10-19 2012-04-25 Microsoft Corp. Availability management for reference data services
US20130212603A1 (en) * 2012-02-10 2013-08-15 Twilio, Inc. System and method for managing concurrent events
CN103455443A (en) * 2013-09-04 2013-12-18 Huawei Technologies Co., Ltd. Buffer management method and device
CN104836800A (en) * 2015-04-17 2015-08-12 Huawei Technologies Co., Ltd. Service quality control method, equipment and service quality control system
CN105684387A (en) * 2013-10-04 2016-06-15 Akamai Technologies, Inc. Systems and methods for caching content with notification-based invalidation

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102710776B (en) * 2012-06-05 2014-08-20 Tianjin Zhaomin Cloud Computing Technology Co., Ltd. Method for preventing repeatedly requesting API server in short time
CN105279034B (en) * 2015-10-26 2018-11-30 Beijing Pierbulaini Software Co., Ltd. Consistency cache control system and method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101567840A (en) * 2008-04-22 2009-10-28 Shenzhen Coship Electronics Co., Ltd. Streaming media data cache control method and device
CN102426541A (en) * 2010-10-19 2012-04-25 Microsoft Corp. Availability management for reference data services
US20130212603A1 (en) * 2012-02-10 2013-08-15 Twilio, Inc. System and method for managing concurrent events
CN103455443A (en) * 2013-09-04 2013-12-18 Huawei Technologies Co., Ltd. Buffer management method and device
CN105684387A (en) * 2013-10-04 2016-06-15 Akamai Technologies, Inc. Systems and methods for caching content with notification-based invalidation
CN104836800A (en) * 2015-04-17 2015-08-12 Huawei Technologies Co., Ltd. Service quality control method, equipment and service quality control system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114785637A (en) * 2022-03-15 2022-07-22 Inspur Cloud Information Technology Co., Ltd. Implementation method and system for caching response data by API gateway
CN114785637B (en) * 2022-03-15 2024-08-09 Inspur Cloud Information Technology Co., Ltd. Implementation method and system for API gateway cache response data

Also Published As

Publication number Publication date
WO2018023966A1 (en) 2018-02-08
CN107689969B (en) 2020-01-17

Similar Documents

Publication Publication Date Title
JP6095106B2 (en) System and method for adaptive selection of bank cards for payment
CN108241799B (en) Cross-system access method, system, device and computer readable storage medium
CN108737325A (en) A kind of multi-tenant data partition method, apparatus and system
CN111629051B (en) Performance optimization method and device for industrial internet identification analysis system
CN109787908A (en) Server current-limiting method, system, computer equipment and storage medium
CN110032451A (en) Distributed multilingual message realization method, device and server
CN109993572A (en) Retention ratio statistical method, device, equipment and storage medium
CN107870989A (en) webpage generating method and terminal device
CN108563789A (en) Data cleaning method based on Spark frames and device
CN103200338A (en) Telephone traffic statistic method
CN107689969A (en) A kind of determination method and device of cache policy
CN110381150B (en) Data processing method and device on block chain, electronic equipment and storage medium
CN111984733A (en) Data transmission method and device based on block chain and storage medium
CN103297419A (en) Method and system for fusing off-line data and on-line data
CN108900482A (en) Execution method, server management system and the storage medium of script
CN109561152B (en) Data access request response method, device, terminal and storage medium
CN101517540B (en) Method and system for resource-based event typing in a rules system
CN111753162A (en) Data crawling method, device, server and storage medium
CN106803798A (en) Virtual switch QoS configuration management systems and Cloud Server under a kind of cloud platform
CN110417919A (en) A kind of flow abduction method and device
US20160020970A1 (en) Router and information-collection method thereof
CN109165145A (en) A kind of the service condition statistical method and device of application program
Wang et al. On the evolution of Linux kernels: a complex network perspective
CN112001595B (en) Resource splitting method and device
CN106998276A (en) Data processing, storage, querying method and data handling system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210428

Address after: Unit 3401, unit a, building 6, Shenye Zhongcheng, No. 8089, Hongli West Road, Donghai community, Xiangmihu street, Futian District, Shenzhen, Guangdong 518040

Patentee after: Honor Device Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.