CN115883653B - Request processing method, request processing device, electronic equipment and storage medium - Google Patents
- Publication number: CN115883653B (granted publication of application CN202211492758.2A)
- Authority
- CN
- China
- Prior art keywords
- node
- request
- request processing
- processing network
- stateful application
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Data Exchanges In Wide-Area Networks (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The disclosure provides a request processing method, a request processing apparatus, an electronic device and a storage medium, and relates to artificial intelligence technical fields such as distributed storage, the Internet of Things, cloud computing and cloud native computing. The method comprises the following steps: receiving a to-be-processed request initiated by a user; in response to the current node being the master node in a request processing network, determining the target node that stores the historical state data of the to-be-processed request, wherein the master node and the slave nodes forming the request processing network are each served by a different copy of a stateful application, the historical state data of different requests are stored in different nodes in a scattered manner, and each copy serving as a node runs in memory while carrying its historical state data; and forwarding the to-be-processed request to the target node, and controlling the target node to generate response information corresponding to the request according to the stored historical state data. The method can conveniently improve the overall request processing capacity and performance by adding more stateful application copies or increasing the amount of resources allocated to each copy.
Description
Technical Field
The disclosure relates to the technical field of information processing, in particular to artificial intelligence technical fields such as distributed storage, the Internet of Things, cloud computing and cloud native computing, and specifically relates to a request processing method, a request processing apparatus, an electronic device, a computer-readable storage medium and a computer program product.
Background
In cloud native computing, the state of an application refers to its condition at a particular moment in time. Whether an application is stateful or stateless generally depends on whether it records interaction state and on the manner in which that information is stored.
Stateless applications may generally be understood as isolated: there is no way to know about their historical transactions, and each request or transaction handled by the application starts from scratch. In general, a stateless application provides a single service and uses a content delivery network or web server to handle such short-lived requests.
Stateful applications typically handle transactions that are periodic and interdependent, such as internet banking or email, which are performed in the context of previous transactions; historical transactions are highly likely to affect the current transaction. One common phenomenon is that a stateful application uses the same application server each time a given user's request is processed (i.e., the request is handled by the same application copy, never by other copies). When a transaction is interrupted, the stateful application records its context locally for the next resume.
Scalability has long been one of the important design indicators of the computing and processing power of an application system. High scalability represents elasticity: a roughly linear increase in the processing power of the system can be achieved by adding hardware or servers as the application iterates.
How to overcome the difficulty of scaling stateful applications in cloud native environments is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
The embodiment of the disclosure provides a request processing method, a request processing device, electronic equipment, a computer readable storage medium and a computer program product.
In a first aspect, an embodiment of the present disclosure provides a request processing method, including: receiving a to-be-processed request initiated by a user; determining, in response to the current node being the master node in a request processing network, the target node that stores the historical state data of the to-be-processed request, wherein the master node and the slave nodes forming the request processing network are each served by a different copy of a stateful application, the historical state data of different requests are stored in different nodes in a scattered manner, each copy serving as a node runs in memory while carrying its historical state data, and the master node records the actual storage node of each piece of historical state data; and forwarding the to-be-processed request to the target node, and controlling the target node to generate response information corresponding to the request according to the stored historical state data.
In a second aspect, an embodiment of the present disclosure provides a request processing apparatus, including: a pending request receiving unit configured to receive a to-be-processed request initiated by a user; a target node determining unit configured to determine, in response to the current node being the master node in a request processing network, the target node that stores the historical state data of the to-be-processed request, wherein the master node and the slave nodes forming the request processing network are each served by a different copy of a stateful application, the historical state data of different requests are stored in different nodes in a scattered manner, each copy serving as a node runs in memory while carrying its historical state data, and the master node records the actual storage node of each piece of historical state data; and a pending request processing unit configured to forward the to-be-processed request to the target node and control the target node to generate response information corresponding to the request according to the stored historical state data.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to implement the request processing method as described in the first aspect when executed.
In a fourth aspect, embodiments of the present disclosure provide a non-transitory computer-readable storage medium storing computer instructions for enabling a computer to implement a request processing method as described in the first aspect when executed.
In a fifth aspect, the presently disclosed embodiments provide a computer program product comprising a computer program which, when executed by a processor, is capable of carrying out the steps of the request processing method as described in the first aspect.
According to the request processing scheme provided by the present disclosure, for a stateful application that needs stored historical state data in order to respond to incoming to-be-processed requests, a plurality of copies of the stateful application are created in advance and used as nodes to construct a grid-form request processing network. Supplemented by a master-slave mechanism and a distributed storage mechanism, this forms an architecture for processing requests of the stateful application: an incoming to-be-processed request can be forwarded by the master node to the target node that stores the request's historical state data, so that the target node completes the response. Besides providing basic request processing capability, the architecture can also improve overall request processing capacity and performance by adding more stateful application copies or increasing the amount of resources allocated to each copy.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
Other features, objects and advantages of the present disclosure will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings:
FIG. 1 is an exemplary system architecture in which the present disclosure may be applied;
FIG. 2 is a flowchart of a request processing method according to an embodiment of the present disclosure;
FIG. 3 is a flow chart of a method of constructing a request processing network provided by an embodiment of the present disclosure;
FIG. 4 is a flow chart of two different capacity expansion methods provided in embodiments of the present disclosure;
FIG. 5a is a schematic diagram of different applications included in a device management platform according to an embodiment of the present disclosure;
FIG. 5b is a schematic diagram illustrating adjustment of data storage locations in a copy-expansion method according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of a request processing apparatus according to an embodiment of the present disclosure;
Fig. 7 is a schematic structural diagram of an electronic device adapted to perform a request processing method according to an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness. It should be noted that, without conflict, the embodiments of the present disclosure and features of the embodiments may be combined with each other.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other handling of users' personal information all comply with relevant laws and regulations and do not violate public order and good morals.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the request processing methods, apparatus, electronic devices, and computer readable storage media of the present disclosure may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a request processing network 105. The network 104 is the medium used to provide communication links between the terminal devices 101, 102, 103 and the request processing network 105, and may include various connection types, such as wired or wireless communication links, or fiber optic cables; the request processing network 105 is composed of a plurality of request processing nodes, each of which is served by a copy of the stateful application, and each copy stores historical state data for responding to a portion of the pending requests.
The user may interact with any node in the request processing network 105 through the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various applications for enabling information communication between the terminal devices 101, 102, 103 and each node constituting the request processing network 105, such as a request transmission class application, a request processing class application, an in-network messaging application, and the like, may be installed on each node.
The terminal devices 101, 102, 103 and the nodes constituting the request processing network 105 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smartphones, tablets, laptop and desktop computers, etc.; when the terminal devices 101, 102, 103 are software, they may be installed in the above-listed electronic devices, which may be implemented as a plurality of software or software modules, or may be implemented as a single software or software module, which is not particularly limited herein. When each node constituting the request processing network 105 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server; when each node is software or a software running product, the node may be implemented as a plurality of software or software modules, or may be implemented as a single software or software module, which is not specifically limited herein.
Each node constituting the request processing network 105 can provide various services through various built-in applications. Taking as an example a request-processing-class application that can process incoming pending requests, each node constituting the request processing network 105 can achieve the following effects when running that application: first, receiving a pending request transmitted by a user through the terminal devices 101, 102, 103 over the network 104; then, when the current node is the master node in the request processing network 105, determining the target node that stores the historical state data of the pending request, wherein the master node records the actual storage nodes of the different historical state data; and finally, forwarding the pending request to the target node and controlling the target node to generate response information corresponding to the request according to the stored historical state data.
The request processing method provided by the subsequent embodiments of the present disclosure is generally performed by the request processing network 105 that stores the historical state data for responding to the to-be-processed requests, which is equivalent to saying it is performed by the nodes constituting the request processing network 105; accordingly, the request processing apparatus is generally provided in each node constituting the request processing network 105.
It should be understood that the number of terminal devices, networks and nodes in fig. 1 is merely illustrative. There may be any number of terminal devices, networks and nodes, as desired for implementation.
Referring to fig. 2, fig. 2 is a flowchart of a request processing method according to an embodiment of the disclosure, wherein a flowchart 200 includes the following steps:
step 201: receiving a to-be-processed request initiated by a user;
This step aims at receiving a pending request initiated by a user (e.g. by a terminal device 101, 102 or 103 shown in fig. 1) by an executing body of the request processing method (e.g. a node constituting the request processing network 105 shown in fig. 1).
The pending request is a request (e.g., a deposit inquiry request) initiated by a user to a certain stateful application (e.g., a bank application), that is, the stateful application is expected to make a correct response to the request according to the pre-stored historical state data of the pending request, that is, the user initiates the request to obtain the response.
The concept of a stateful application exists to distinguish it from a stateless application. Stateless generally means that when any Web client makes a request, the request itself contains all the information (authentication information, etc.) the responder needs in order to respond; stateful generally means that the requesting client's request must be submitted to the server that stores the relevant state information (such as historical session information), otherwise the request may not be understood, which means that in this mode the server side cannot freely schedule user requests.
In short, the difference lies in whether the state information is held by the requester or by the responder: if the requester carries it, the application is stateless; if the responder must hold it, the application is stateful. A stateless application does not care which responder handles a request, no information needs to be synchronized between responders, any response service can be removed at any time without affecting others, fault tolerance is high, load balancing failures in a distributed service do not lose data, no memory is consumed for state, and the service can be deployed online directly. A stateful application, by contrast, must synchronize data in time and may suffer from incomplete data synchronization, data loss, and the memory resource consumption of keeping data in memory.
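The distinction above can be illustrated with a minimal sketch (the handler and field names are hypothetical, not from this disclosure): a stateless handler answers from the request alone, while a stateful handler can only answer on the server that holds the session.

```python
def handle_stateless(request):
    # everything needed for the response travels inside the request itself
    return f"hello {request['user']}"


def handle_stateful(request, sessions):
    # the response depends on server-side history; a server that does not
    # hold the session cannot understand the request
    session = sessions.get(request["session_id"])
    if session is None:
        raise KeyError("session not on this server; cannot respond")
    return f"hello {session['user']}, visit #{session['visits'] + 1}"
```

The stateless handler can run on any replica; the stateful one only on the replica whose `sessions` store contains the session, which is exactly why free scheduling of stateful requests is difficult.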
Step 202: determining a target node for storing historical state data of a request to be processed in response to the current node being a master node in the request processing network;
On the basis of step 201, this step aims at determining, by the master node, the target node that stores the historical state data of the to-be-processed request, in the case where the execution subject is the master node in the request processing network. The master node and the slave nodes forming the request processing network are each served by a different copy of the stateful application, and the historical state data of different requests are stored in different nodes in a scattered manner. Meanwhile, each copy serving as a node runs in memory while carrying its historical state data, instead of persisting it to a hard disk, so as to improve response speed.
Since the historical state data of different requests are stored in different nodes in a scattered manner, in order for the master node to determine which node stores the historical state data of a given to-be-processed request, the master node records in advance the actual storage node of each piece of historical state data. This information can be represented as a correspondence table between the historical state data of different requests and the different nodes, so that the target node can be determined by a table lookup. The target node may be the master node itself (i.e., the master node, like the slave nodes, also stores the historical state data of some requests), or it may be one of the slave nodes.
Further, under the master-slave mechanism, the stateful application copy serving as the master node may store the same amount of historical state data as a copy serving as a slave node, so that the amount of data stored on each node is uniform and easy to manage; it may instead store less historical state data than a slave-node copy, leaving the master node room for data the slave nodes do not need to store (e.g., the correspondence table described above). Further, to make it easy for a node to determine whether it is the master node or a slave node, a master-node flag or slave-node flag may be added, or the node may determine whether it is the master according to a previously agreed convention, which is not specifically limited herein.
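The table-lookup routing described in this step can be sketched as follows; the class and field names are illustrative assumptions, not part of the disclosure.

```python
class MasterNode:
    def __init__(self, node_id, state_table):
        self.node_id = node_id
        # correspondence table: request key -> id of the node that stores
        # that request's historical state data
        self.state_table = dict(state_table)

    def resolve_target(self, request_key):
        """Return the node holding the request's historical state.

        The target may be the master itself, since under the master-slave
        mechanism the master may also store part of the historical state
        data like any slave node. A request with no recorded entry is
        handled locally here, which is a design choice of this sketch.
        """
        return self.state_table.get(request_key, self.node_id)
```

A slave node receiving a request would simply forward it to the master, which performs this lookup and relays the request onward.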
Step 203: and forwarding the request to be processed to the target node, and controlling the target node to generate response information corresponding to the request to be processed according to the stored historical state information.
Based on step 202, this step aims at forwarding the pending request to the target node by the execution body, and controlling the target node to generate response information corresponding to the pending request according to the stored historical state information of the pending request.
In the branch different from that of steps 202 and 203, if the execution subject is a slave node in the request processing network, then, since a slave node does not record which node stores the historical state data of each request, it needs to forward the to-be-processed request to the master node, so that the master node processes the received request through the flows of steps 202 and 203.
After the target node generates the response information of the pending request, the target node returns the response information to the user initiating the pending request, so that the user receives the response information to complete the processing of the request.
According to the request processing method provided by the embodiment of the present disclosure, for a stateful application that needs stored historical state data in order to respond to incoming to-be-processed requests, a plurality of copies of the stateful application are created in advance and used as nodes to construct a grid-form request processing network. Supplemented by a master-slave mechanism and a distributed storage mechanism, this forms an architecture for processing requests of the stateful application: an incoming to-be-processed request can be forwarded by the master node to the target node that stores the request's historical state data, so that the target node completes the response. Besides providing basic request processing capability, the architecture can also improve overall request processing capacity and performance by adding more stateful application copies or increasing the amount of resources allocated to each copy.
To enhance the understanding of how the request processing network is built, the present embodiment also shows a flowchart of a method of building a request processing network from the perspective of any stateful application copy acting as a node, with fig. 3, where the flowchart 300 includes the steps of:
step 301: acquiring the allocated unique identity information;
This step is intended for any stateful application copy acting as a node: upon completion of its creation, the copy obtains the allocated unique identity information, which distinguishes it from other stateful application copies.
Specifically, the unique identity information can be a unique identity number, a unique name, a unique copy number and the like, is not particularly limited herein, and can be selected according to actual application scenes.
Step 302: acquiring an allocated unique domain name;
Based on step 301, this step aims to obtain, by the execution subject, the subsequently allocated unique domain name. The unique domain name is generated based on the previously allocated unique identity information; that is, the unique domain name may embody this by containing the unique identity information, or another domain-name generation method may be adopted, as long as the generated domain name is guaranteed to be unique, for example an algorithm that generates non-repeating character strings.
Step 303: discovering other domain names having similarity with the unique domain name, and determining other stateful application copies corresponding to the other domain names;
based on step 302, this step aims at discovering, by the executing entity, other domain names that are similar to the unique domain name based on the similarity of the domain names, to further determine other stateful application copies corresponding to the other domain names, i.e., the other domain names have similarity in domain name content to the unique domain name.
Specifically, when the unique domain name includes complete unique identity information, the other domain name may be a domain name that is different from the unique domain name only in content of identity information recorded in an identity information recording field that is included in the unique domain name (i.e., a field dedicated to recording identity information that forms the domain name where the unique domain name is located), that is, the other domain name is consistent with other information included in the current unique domain name, and only the identity information portions that characterize copies of different stateful applications are different.
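A minimal sketch of such identity-field-based matching follows, under the illustrative assumption that each copy's domain name embeds its identity as `<app>-<id>.<service>` (the naming pattern and function names are assumptions, not from the disclosure):

```python
import re

# Domains are "similar" when they agree on everything except the
# identity-information field (here, the numeric replica id).
DOMAIN_PATTERN = re.compile(
    r"^(?P<app>[\w-]+)-(?P<replica_id>\d+)\.(?P<service>[\w.]+)$"
)


def peers_of(own_domain, candidate_domains):
    """Return candidates whose domains match own_domain except the id field."""
    own = DOMAIN_PATTERN.match(own_domain)
    if own is None:
        return []
    peers = []
    for domain in candidate_domains:
        m = DOMAIN_PATTERN.match(domain)
        if (m and domain != own_domain
                and m.group("app") == own.group("app")
                and m.group("service") == own.group("service")):
            peers.append(domain)
    return peers
```

Each discovered peer domain then identifies another stateful application copy that should join the same request processing network.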
Step 304: determining the node types of the current stateful application copy and other stateful application copies which are respectively selected as based on a preset master-slave node selection mechanism;
Based on step 303, this step aims at determining, by the execution subject, the node types as which the current stateful application copy and the other stateful application copies are respectively selected, based on a preset master-slave node selection mechanism. That is, under a one-master multiple-slave selection mechanism, the selection outcomes can be divided into two classes according to the node type as which the current stateful application copy is selected:
In one class, the current stateful application copy is selected as the master node and all other stateful application copies are selected as slave nodes; in the other class, the current stateful application copy is selected as a slave node, one of the other stateful application copies is selected as the master node, and the remaining copies are also selected as slave nodes.
Step 305: the request processing network is built with other nodes based on the node type.
Based on step 304, this step aims at constructing the request processing network, by the execution subject together with the other nodes, based on the node types. That is, when the current stateful application copy serving as the execution subject is selected as the master node, it actively self-organizes a network with each slave node to complete the construction of the request processing network; when it is selected as a slave node, it self-organizes with the master node and the other slave nodes according to the received networking request initiated by the master node, thereby completing the construction of the request processing network.
Specifically, the master-slave node selection mechanism may include: a rotation mechanism based on the unique copy serial number (e.g., rotating gradually from small serial numbers to large ones), a voting mechanism based on election initiation time (i.e., the earlier a node initiates an election nominating itself as master, the more likely it is to be voted master), a selection mechanism based on copy creation time (e.g., the longer ago a copy was created, the higher its probability of being selected as the new master node, or conversely, the more recently created, the higher the probability), and the like.
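As one illustration of the serial-number-based mechanism listed above, the copy with the smallest unique serial number might be selected as master; the following sketch assumes that rule (an assumption for illustration, not the disclosure's mandated choice):

```python
def elect_master(replica_serials):
    """Given the unique serial numbers of all replicas, return the master's."""
    if not replica_serials:
        raise ValueError("request processing network has no replicas")
    # deterministic rule: smallest serial number wins, so every replica
    # independently reaches the same conclusion without extra messages
    return min(replica_serials)


def node_type(own_serial, replica_serials):
    """Classify this replica as 'master' or 'slave' under the rule above."""
    return "master" if own_serial == elect_master(replica_serials) else "slave"
```

Because the rule is deterministic, all copies agree on the single master of the one-master multiple-slave arrangement without any coordination round.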
Through steps 301-304, the present embodiment provides a node self-organizing scheme based on similar domain names: by discovering stateful application copies with similar domain names, the objects of the self-organized network are determined, and the self-organization of the request processing network is thereby completed.
On the basis of any of the above embodiments, to illustrate that the constructed request processing network architecture is convenient to expand, fig. 4 shows schematic diagrams of two different capacity expansion modes:
Mode one: when the current node is the master node in the request processing network and receives a first capacity expansion instruction, a new stateful application copy can be created according to the instruction, joined to the request processing network as a new slave node, and the actual storage nodes of the historical state data of the different requests re-determined. In other words, the first expansion instruction indicates that a new stateful application copy should be created, which increases the number of nodes in the request processing network; so that every node remains responsible for processing part of the requests, the historical state data previously held only by the original nodes must be redistributed across all current nodes. When transferring the historical state data, a minimum-transfer principle can be followed, which avoids unnecessary data read/write operations and shortens the time the adjustment takes.
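The minimum-transfer principle can be illustrated with a small sketch: under the hash-modulo partition policy used later in this embodiment, only the keys whose owning node changes between the old and new node sets are moved. The helper names (`stable_hash`, `plan_minimal_transfer`) are hypothetical.

```python
import zlib


def stable_hash(key):
    # Deterministic across processes (unlike Python's built-in hash()).
    return zlib.crc32(key.encode("utf-8"))


def plan_minimal_transfer(keys, old_nodes, new_nodes):
    """Return (key, source, destination) moves for only those keys whose
    owning node changes after expansion; keys whose owner is unchanged
    are not touched, saving unnecessary read/write operations."""
    moves = []
    for key in keys:
        h = stable_hash(key)
        src = old_nodes[h % len(old_nodes)]
        dst = new_nodes[h % len(new_nodes)]
        if src != dst:
            moves.append((key, src, dst))
    return moves
```

Applying the returned moves to the old placement yields exactly the new placement, while every key whose old and new owner coincide stays where it is.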
Mode two: when the current node is the master node in the request processing network and receives a second capacity expansion instruction, a new resource allocation amount for each stateful application copy may be determined according to the instruction, and the actual resource allocation amount of the node served by each copy is then increased to that new amount. That is, the second expansion instruction instructs increasing the resource allocation (such as computing performance, maximum memory occupation, maximum bandwidth, etc.) of each existing node, thereby raising the maximum request throughput of each node.
To deepen understanding, the disclosure further provides a specific implementation scheme in combination with a device management scenario in the internet of things:
The device management platform described in this embodiment is a very large platform, mainly used to manage massive numbers of devices in internet of things scenarios. It provides services covering device data, resources, data semantic definitions and the like, as well as two-way communication between all devices and the cloud, such as devices reporting data and the cloud issuing data to control devices. To implement a platform of this scale, the application is split using microservices; for convenience of explanation, a simplified application structure is shown in fig. 5a:
The device/request access application is a stateless application: requests have no correlation with one another, and the corresponding operation is executed directly. Its main functions are: rights authentication, which on the one hand authenticates which devices may access the platform and which may not, and on the other hand authenticates which Baidu Cloud accounts may access the platform and which may not; and request distribution, which routes each request to the other applications according to its interface to carry out the actual function.
The device resource management application, also stateless, models physical devices and divides them on three levels: instances/products/devices. An instance partitions the resources used by devices, a product generalizes the common characteristics of devices, and a device is the unique ID of each physical device. Its main functions are: external device resource management — the internet of things service provides many applications, and the external resources are not all developed/maintained by the device management platform; some devices may report data through the internet of things core suite, and the related resources need to be opened up; and internal device resource management — communication is required between the applications of the device management platform, and the internal resources, generally including a cache, a message queue, a registry and the like, need to be managed centrally by the platform, for example so that a device can use a certain resource exclusively.
The internet of things core suite application is stateless. The device management platform can embed some functions of the core suite; these functions are independent of the device management platform itself. Its main functions are: scene linkage service, embedding the scene linkage service and storing the service's meta-information, etc.; and rule engine service, embedding the rule engine service and storing the service's meta-information, etc.
Device runtime application scheduling is a stateless application. The device resource management application partitions the resources used by devices per instance; here the resources are the device runtime applications, and scheduling refers to manually changing runtime applications that are already in operation. Its main functions are: runtime application management, storing the meta-information of runtime applications, their deployment parameters on the Kubernetes platform, and the like; and runtime application operation and maintenance, providing an interface to operate runtime applications directly — for example, when insufficient physical resources prevent Kubernetes from deploying an application, the interface can be called to query the reason and resolve the problem — or to provide upgrade behavior for runtime applications, e.g., iterative upgrades of device management platform functionality.
The only stateful application is the device management runtime application, whose main functions are: application discovery, discovering the other copies of the application to determine exactly how many nodes form the grid network, so as to complete partitioning and fault tolerance of data storage; data docking, receiving data reported by devices and issuing data to devices in two ways, one being long-lived links and the other message subscription; and local computation, which orders the data of all devices and provides Read-Your-Writes consistency. Sometimes data arrives out of order because of network delay; local computation gathers the data, rearranges it by sequence number, and provides the rearranged data to other functions for subsequent processing. The time and attribute values of a device's last data report are recorded, so a user can retrieve the device's shadow at any time. Meanwhile, a differential calculation is performed each time device data arrives, generating data difference information that is pushed to subscribed applications or devices. Finally, device retrieval: under the current instance, devices accessed by the runtime application can be retrieved by category or by a keyword of a label.
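The rearrangement by sequence number in the local computation function can be sketched as a reorder buffer; the class name and interface here are illustrative assumptions, not the embodiment's actual implementation.

```python
import heapq


class DeviceReorderBuffer:
    """Buffers reports that arrive out of order due to network delay and
    releases them strictly in sequence-number order, supporting ordered
    (Read-Your-Writes-style) processing downstream."""

    def __init__(self, first_seq=0):
        self.next_seq = first_seq
        self.pending = []  # min-heap of (sequence number, report data)

    def push(self, seq, data):
        heapq.heappush(self.pending, (seq, data))
        released = []
        # Drain every report now contiguous with what was already released.
        while self.pending and self.pending[0][0] == self.next_seq:
            released.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return released
```

Pushing report 1 before report 0 releases nothing; once report 0 arrives, both are released in order.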
For the first four applications described above, which are stateless, the high-extensibility target can be met by adding application copies horizontally. High scalability of the stateful device management runtime application is not so simple: its peak transaction throughput is difficult to bound, because the runtime application must handle data reporting/issuing for the devices of all instances, and if state were externalized, the application would depend too heavily on the network, so that rising transaction throughput would congest network bandwidth and degrade the runtime application's performance. This embodiment therefore proposes a more general, highly scalable architecture suitable for stateful applications on cloud native platforms:
Step 1: define the device management runtime application on the cloud native platform as a StatefulSet resource; this step gives the application a unique name and a copy sequence number when deployed;
step 2: a Service resource is defined HEADLESS SERVICE for a device management runtime application, which is a special Service resource on a Kubernetes (an open source application for managing containerized applications on multiple hosts in a cloud platform) platform, and the actual ClusterIP (Service IP address, which is a virtual IP address, cannot be ping-enabled by an external network, and is only used for Kubernetes cluster internal access). This step may allow the application to own DNS (Domain NAME SYSTEM, translated herein as a Domain name) in the general format:
<statefulset-number>.<service-name>.<namespace>.svc.cluster.local;
where statefulset-number is the unique copy sequence number generated in step 1.
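The DNS names implied by the general format can be assembled per copy as follows; the StatefulSet, Service and namespace names used here are hypothetical examples.

```python
def replica_dns_name(statefulset_name, ordinal, service_name, namespace):
    # A StatefulSet pod is named "<statefulset-name>-<ordinal>", and the
    # headless Service exposes it at
    # <pod-name>.<service-name>.<namespace>.svc.cluster.local
    return (f"{statefulset_name}-{ordinal}."
            f"{service_name}.{namespace}.svc.cluster.local")


def peer_dns_names(statefulset_name, total_replicas, service_name, namespace):
    # Enumerate every copy's DNS name from the shared general format.
    return [replica_dns_name(statefulset_name, i, service_name, namespace)
            for i in range(total_replicas)]
```

Injecting only the general format plus the total copy count thus lets each copy derive the addresses of all of its peers.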
Step 3: a ServiceAccount resource, an account-permission resource on the Kubernetes platform, is added to the device management runtime application; through it, the application can access certain Kubernetes APIs (Application Programming Interfaces).
Step 4: an application discovery function is implemented in the device management runtime application, used to ad hoc network all runtime application copies into a grid network. The specific steps are as follows:
a) Before the application starts, the DNS general format is injected into the application as an environment variable;
b) On startup, the application accesses the Kubernetes API to obtain the total number of copies configured in the StatefulSet resource, obtains its own sequence number within the StatefulSet from the hostname, and thereby determines the total number of ad hoc network nodes;
c) According to the total node count and the sequence number obtained in the previous step, the device state (the local computation part) is partitioned. In this embodiment, Hazelcast (a highly scalable data distribution and clustering platform usable for distributed data storage and data caching) handles device-state storage. The data partitioning policy is set to hash-modulo: a hash is computed for each device and taken modulo the number of effective nodes, which yields the node on which that device's data should be computed;
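Steps b) and c) can be sketched together: the copy's sequence number is parsed from the StatefulSet hostname, and device state is assigned by hash-modulo. The helper names are hypothetical, and CRC32 stands in for whatever hash function Hazelcast actually applies.

```python
import re
import zlib


def ordinal_from_hostname(hostname):
    # StatefulSet pod hostnames end in "-<ordinal>", e.g. "device-runtime-2".
    match = re.search(r"-(\d+)$", hostname)
    if match is None:
        raise ValueError("not a StatefulSet hostname: %s" % hostname)
    return int(match.group(1))


def owner_node(device_id, total_nodes):
    # Hash-modulo partitioning: hash the device ID, then take the result
    # modulo the number of effective nodes.
    return zlib.crc32(device_id.encode("utf-8")) % total_nodes
```

A copy processes a device's data only when `owner_node(device_id, total_nodes)` equals its own ordinal.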
d) The Kubernetes API is accessed to obtain the currently surviving (active) application nodes. If no other surviving node exists, the current node becomes the master node in the grid network and registers a node add/delete listening event, which is an application-copy lifecycle callback event of the Kubernetes platform. If other surviving nodes exist, the master node in the grid network performs a data repartitioning operation and notifies the other nodes to repartition and transfer data;
e) When an application copy shuts down, the master node in the grid network notifies the other application copy nodes to repartition, and the closing node migrates away all of its device data (i.e., state);
Step 5: the device management runtime application is initialized, and all devices in the instance are grouped according to the total node count; the grouping policy remains consistent with the partitioning policy and covers device subscription, publication, linkage and the like.
Step 6: through step 4, the device management runtime application is now capable of ad hoc networking, and the nodes share and record one another's state.
Next, device requests/links need to be distributed; the device/request access application already implements this function, as follows:
a) When a device's request enters the device/request access application, that application asks the master node of the device management runtime application's grid network to determine which runtime application copy the request should be sent to;
b) The request is distributed to that application copy;
c) The response to the request is returned.
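A minimal sketch of this three-step distribution, with hypothetical class names standing in for the access application, the grid master and the runtime copies:

```python
import zlib


class MasterIndex:
    """Answers location queries: which runtime copy owns a device's state."""

    def __init__(self, total_nodes):
        self.total_nodes = total_nodes

    def locate(self, device_id):
        return zlib.crc32(device_id.encode("utf-8")) % self.total_nodes


class RuntimeNode:
    """One device management runtime copy holding its partition of state."""

    def __init__(self, ordinal):
        self.ordinal = ordinal
        self.state = {}

    def process(self, device_id, payload):
        self.state[device_id] = payload  # update locally held state
        return {"node": self.ordinal, "device": device_id, "ok": True}


class AccessApp:
    """Stateless device/request access application."""

    def __init__(self, master, nodes):
        self.master = master
        self.nodes = nodes

    def handle(self, device_id, payload):
        target = self.master.locate(device_id)               # a) ask the master
        response = self.nodes[target].process(device_id, payload)  # b) distribute
        return response                                       # c) return response
```

Because the access application itself is stateless, any of its copies can perform this routing, while the device's state stays pinned to one runtime node.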
Step 7: if the master node in the grid network is about to shut down, the node with the next copy sequence number becomes the master node; if there is no next node, the rotation wraps around to the first node.
Through the above seven steps, a highly scalable architecture for a generic cloud native stateful application is complete.
The device management runtime application still incurs some network consumption, but only when a node goes online or offline. For example, suppose the device management runtime application runs with two copies but traffic suddenly increases, so it must be scaled horizontally to three or more copies; at that moment a full repartitioning of state data is performed for data synchronization. StatefulSet resources constrain application copies to start one at a time rather than all at once, which avoids a flood problem: the peak of network resource consumption is reduced to what a single startup consumes, achieving a peak-clipping effect.
Hazelcast stores replicas of the state data, but each node in fact keeps its own state and migrates it to others only when a stateful application copy dies. That is, externalization of state data occurs only during application start/stop, not on every request. This "sticky session" mechanism spreads all of an instance's devices evenly across all nodes, while actual request handling/linking/subscription and so on still occur on the corresponding application node.
Compared with the other nodes, the master node in the grid network carries one extra piece of state: each node internally keeps a flag recording whether it is the master node, together with the master node's application copy sequence number. The advantage is that whenever the device/request access application asks which node is the master, any node can answer, i.e., forward the query and return the result.
Fig. 5b shows the changes that horizontal expansion causes to a stateful application in steady operation. As shown in the left half, the device management runtime application in steady operation has two copies, numbered 0 and 1. Copy 0 stores the state and related data of devices A, B, C and D; copy 1 stores the state and related data of devices E, F, G and H. Copy 0 is the master node of the grid network. After horizontal expansion, as in the right half of the figure, copy 2 joins the grid network; the data is then repartitioned and a portion of it is transferred to the new node. Notably, not all data is moved: for example, devices A and C remain in copy 0, and only device E is migrated. This partial, rather than total, data movement also reduces performance overhead and network consumption to some extent.
When the total number of devices is fixed, horizontal expansion reduces the number of devices each stateful application copy must handle, greatly enhancing application performance. When the transaction throughput of a single device increases, only vertical expansion is needed: adding a small amount of hardware resources (CPU, network, memory, etc.) accomplishes the performance matching.
In summary, this architecture is more scalable than current technology and can be expanded simply to achieve a linear increase in performance.
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present disclosure provides an embodiment of a request processing apparatus, where an embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 6, the request processing apparatus 600 of this embodiment may include: a pending request receiving unit 601, a target node determining unit 602, and a pending request processing unit 603. The pending request receiving unit 601 is configured to receive a pending request initiated by a user. The target node determining unit 602 is configured to determine, in response to the current node being the master node in a request processing network, the target node storing the historical state data of the pending request; the master node and the slave nodes forming the request processing network are served by different copies of a stateful application, the historical state data of different requests are stored dispersedly across different nodes, each copy serving as a node runs in memory carrying its historical state data, and the master node records the actual storage nodes of the different historical state data. The pending request processing unit 603 is configured to forward the pending request to the target node and control the target node to generate response information corresponding to the pending request according to the stored historical state information.
In the present embodiment, in the request processing apparatus 600: the specific processing of the pending request receiving unit 601, the target node determining unit 602, and the pending request processing unit 603 and the technical effects thereof may refer to the relevant descriptions of steps 201 to 203 in the corresponding embodiment of fig. 2, and are not repeated herein.
In some optional implementations of this embodiment, the request processing apparatus 600 may further include:
and the pending request forwarding unit is configured to forward the pending request to the master node in response to the current node being a slave node in the request processing network.
In some optional implementations of this embodiment, the request processing apparatus 600 may further include: a request processing network construction unit configured to construct the request processing network in advance, the request processing network construction unit being further configured to:
acquiring the allocated unique identity information;
acquiring an allocated unique domain name; wherein the unique domain name is generated based on the unique identity information;
discovering other domain names having similarity with the unique domain name, and determining other stateful application copies corresponding to the other domain names;
determine, based on a preset master-slave node selection mechanism, the node types to which the current stateful application copy and the other stateful application copies are respectively elected;
the request processing network is built with other nodes based on the node type.
In some optional implementations of this embodiment, when the unique domain name contains the unique identity information, the other domain names differ from the unique domain name only in the identity information recorded in their respective identity information fields.
In some optional implementations of this embodiment, the master-slave node selection mechanism includes: a serial number rotation mechanism based on a unique copy serial number, a voting mechanism based on an election initiating time and a selection mechanism based on a copy creation time.
In some optional implementations of this embodiment, the request processing apparatus 600 may further include:
the newly built stateful application copy determining unit is configured to determine a newly built stateful application copy according to a first capacity expansion instruction in response to the current node being a master node in a request processing network and receiving the first capacity expansion instruction;
the new slave node joining processing unit is configured to add the newly created stateful application copy to the request processing network as a new slave node and to re-determine the actual storage nodes of the historical state data of the different requests.
In some optional implementations of this embodiment, the request processing apparatus 600 may further include:
A new resource allocation amount determining unit configured to determine a new resource allocation amount of the stateful application copy according to a second capacity expansion instruction in response to the current node being a master node in the request processing network and receiving the second capacity expansion instruction;
A resource allocation amount increasing unit configured to increase an actual resource allocation amount of the node served by the stateful application copy to a new resource allocation amount.
In some optional implementations of this embodiment, the request processing apparatus 600 may further include:
And the response information return unit is configured to control the target node to return the response information to the user initiating the pending request.
This embodiment is the apparatus embodiment corresponding to the foregoing method embodiment. In the request processing apparatus provided here, for a stateful application that must store historical state data in order to respond to received pending requests, multiple copies of the stateful application are created in advance and used as nodes to construct a grid-form request processing network, assisted by a master-slave mechanism and a distributed storage mechanism, forming a highly extensible request processing framework for stateful applications. For an incoming pending request, the master node forwards it to the target node storing that request's historical state data, so that the target node completes the response. Besides basic request processing, the framework can also increase overall request processing capability and performance by adding more stateful application copies or by increasing the amount of resources allocated to each copy.
According to an embodiment of the present disclosure, the present disclosure further provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor to enable the at least one processor to implement the request processing method described in any of the embodiments above when executed.
According to an embodiment of the present disclosure, there is also provided a readable storage medium storing computer instructions for enabling a computer to implement the request processing method described in any of the above embodiments when executed.
According to an embodiment of the present disclosure, the present disclosure further provides a computer program product, which, when executed by a processor, is capable of implementing the steps of the request processing method described in any of the above embodiments.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, for example, a request processing method. For example, in some embodiments, the request processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 700 via ROM 702 and/or communication unit 709. When a computer program is loaded into RAM 703 and executed by computing unit 701, one or more steps of the request processing method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the request processing method by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor capable of receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in cloud computing service systems that remedies the defects of high management difficulty and weak service extensibility in traditional physical hosts and Virtual Private Server (VPS) services.
According to the technical solution of the embodiments of the present disclosure, for a stateful application that must store historical state data in order to respond to received pending requests, multiple copies of the stateful application are created in advance and used as nodes to construct a grid-form request processing network, assisted by a master-slave mechanism and a distributed storage mechanism, forming a request processing framework for stateful applications. An incoming pending request can be forwarded by the master node to the target node storing that request's historical state data, so that the target node completes the response. Besides basic request processing, the framework can also improve overall request processing capability and performance by adding more stateful application copies or by increasing the amount of resources allocated to each copy.
It should be appreciated that steps may be reordered, added, or deleted using the various flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.
Claims (19)
1. A request processing method, comprising:
receiving a pending request initiated by a user;
determining, in response to a current node being a master node in a request processing network, a target node storing historical state data of the pending request; wherein the master node and slave nodes forming the request processing network are served by different copies of a stateful application, historical state data of different requests are distributed across different nodes, each copy serving as a node runs in memory together with the historical state data it carries, and the master node records which node actually stores each piece of historical state data;
and forwarding the pending request to the target node, and controlling the target node to generate response information corresponding to the pending request according to the stored historical state data.
2. The method of claim 1, further comprising:
in response to the current node being a slave node in the request processing network, forwarding the pending request to the master node.
3. The method of claim 1, wherein pre-building the request processing network comprises:
acquiring unique identity information allocated to a stateful application copy serving as a node;
acquiring a unique domain name allocated to the stateful application copy serving as a node; wherein the unique domain name is generated based on the unique identity information;
discovering other domain names similar to the unique domain name, and determining the other stateful application copies corresponding to those other domain names;
determining, based on a preset master-slave node selection mechanism, the node types to which the current stateful application copy and the other stateful application copies are respectively elected;
and constructing the request processing network with the other nodes based on the node types.
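For illustration only, the discovery step in claim 3 can be sketched as matching peer domain names that differ from the node's own domain only in an embedded identity field. The naming pattern (an ordinal such as in `copy-1.app.example.internal`) mirrors Kubernetes StatefulSet DNS names and is an assumption, not something the claims prescribe:

```python
import re

def peer_domains(own_domain, candidate_domains):
    """Find domains that differ from own_domain only in the identity field.

    The identity field is assumed to be the first run of digits in the
    domain (e.g. the ordinal in 'copy-1.app.example.internal').
    """
    # Replace the identity field with a digit wildcard to build a
    # similarity pattern, then match the candidates against it.
    pattern = re.compile(re.sub(r"\d+", r"\\d+", re.escape(own_domain), count=1))
    return [d for d in candidate_domains
            if d != own_domain and pattern.fullmatch(d)]
```

Each copy can run this against the domains it can resolve, yielding exactly the set of sibling replicas with which to form the mesh.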
4. The method of claim 3, wherein, when the unique domain name contains the unique identity information, the other domain names differ from the unique domain name only in the identity information recorded in their respective identity information fields.
5. The method of claim 3, wherein the master-slave node selection mechanism comprises: a sequence-number rotation mechanism based on unique copy sequence numbers, a voting mechanism based on election initiation times, and a selection mechanism based on copy creation times.
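Each of the three mechanisms in claim 5 can be reduced to choosing a minimum over a different replica attribute. The sketch below is a simplification under assumed field names (`seq`, `election_time`, `created_at`); in particular it models "rotation" as simply picking the lowest sequence number rather than rotating mastership over time:

```python
def elect_master(replicas, mechanism="sequence"):
    """Pick the master replica under one of the claim-5 mechanisms.

    replicas: list of dicts with assumed keys 'seq', 'election_time',
    and 'created_at'.
    """
    keys = {
        "sequence": lambda r: r["seq"],        # lowest unique copy sequence number
        "vote": lambda r: r["election_time"],  # earliest election-initiation time
        "creation": lambda r: r["created_at"], # earliest copy creation time
    }
    return min(replicas, key=keys[mechanism])
```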
6. The method of claim 1, further comprising:
in response to the current node being the master node in the request processing network and a first capacity expansion instruction being received, determining a newly created stateful application copy according to the first capacity expansion instruction;
and joining the newly created stateful application copy to the request processing network as a new slave node, and re-determining the actual storage nodes of the historical state data of the different requests.
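When a new slave joins under claim 6, the master must re-determine where each request's historical state is stored. The claims do not prescribe a placement policy; a hash-modulo assignment is one simple possibility, sketched here with invented names:

```python
import hashlib

def rebuild_storage_map(request_keys, node_names):
    """Recompute which node stores each request's historical state data."""
    def place(key):
        digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return node_names[digest % len(node_names)]
    return {key: place(key) for key in request_keys}
```

After the new copy joins, calling `rebuild_storage_map` with the enlarged node list yields the updated map the master records. Note that plain modulo placement relocates many keys on every resize, which is why real systems often prefer consistent hashing.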
7. The method of claim 1, further comprising:
in response to the current node being the master node in the request processing network and a second capacity expansion instruction being received, determining a new resource allocation amount of the stateful application copy according to the second capacity expansion instruction;
and increasing the actual resource allocation amount of the node served by the stateful application copy to the new resource allocation amount.
8. The method of any of claims 1-7, further comprising:
and controlling the target node to return the response information to the user who initiated the pending request.
9. A request processing apparatus comprising:
a pending request receiving unit configured to receive a pending request initiated by a user;
a target node determining unit configured to determine, in response to a current node being a master node in a request processing network, a target node storing historical state data of the pending request; wherein the master node and slave nodes forming the request processing network are served by different copies of a stateful application, historical state data of different requests are distributed across different nodes, each copy serving as a node runs in memory together with the historical state data it carries, and the master node records which node actually stores each piece of historical state data;
and a pending request processing unit configured to forward the pending request to the target node, and to control the target node to generate response information corresponding to the pending request according to the stored historical state data.
10. The apparatus of claim 9, further comprising:
a pending request forwarding unit configured to forward the pending request to the master node in response to the current node being a slave node in the request processing network.
11. The apparatus of claim 9, further comprising: a request processing network construction unit configured to construct the request processing network in advance, and further configured to:
acquire unique identity information allocated to a stateful application copy serving as a node;
acquire a unique domain name allocated to the stateful application copy serving as a node; wherein the unique domain name is generated based on the unique identity information;
discover other domain names similar to the unique domain name, and determine the other stateful application copies corresponding to those other domain names;
determine, based on a preset master-slave node selection mechanism, the node types to which the current stateful application copy and the other stateful application copies are respectively elected;
and construct the request processing network with the other nodes based on the node types.
12. The apparatus of claim 11, wherein, when the unique domain name contains the unique identity information, the other domain names differ from the unique domain name only in the identity information recorded in their respective identity information fields.
13. The apparatus of claim 11, wherein the master-slave node selection mechanism comprises: a sequence-number rotation mechanism based on unique copy sequence numbers, a voting mechanism based on election initiation times, and a selection mechanism based on copy creation times.
14. The apparatus of claim 9, further comprising:
a newly created stateful application copy determining unit configured to determine a newly created stateful application copy according to a first capacity expansion instruction in response to the current node being the master node in the request processing network and the first capacity expansion instruction being received;
and a new slave node joining processing unit configured to join the newly created stateful application copy to the request processing network as a new slave node and to re-determine the actual storage nodes of the historical state data of the different requests.
15. The apparatus of claim 9, further comprising:
a new resource allocation amount determining unit configured to determine a new resource allocation amount of the stateful application copy according to a second capacity expansion instruction in response to the current node being the master node in the request processing network and the second capacity expansion instruction being received;
and a resource allocation amount increasing unit configured to increase the actual resource allocation amount of the node served by the stateful application copy to the new resource allocation amount.
16. The apparatus of any of claims 9-15, further comprising:
a response information returning unit configured to control the target node to return the response information to the user who initiated the pending request.
17. An electronic device, comprising:
At least one processor; and
A memory communicatively coupled to the at least one processor; wherein,
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the request processing method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the request processing method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the request processing method according to any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211492758.2A CN115883653B (en) | 2022-11-25 | 2022-11-25 | Request processing method, request processing device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115883653A CN115883653A (en) | 2023-03-31 |
CN115883653B true CN115883653B (en) | 2024-11-05 |
Family
ID=85764019
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211492758.2A Active CN115883653B (en) | 2022-11-25 | 2022-11-25 | Request processing method, request processing device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115883653B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102035886A (en) * | 2006-11-09 | 2011-04-27 | 微软公司 | Consistency within a federation infrastructure |
CN109074277A (en) * | 2016-03-31 | 2018-12-21 | 微软技术许可有限责任公司 | Stateful dynamic link is enabled in mobile application |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9785510B1 (en) * | 2014-05-09 | 2017-10-10 | Amazon Technologies, Inc. | Variable data replication for storage implementing data backup |
CN111935244B (en) * | 2020-07-20 | 2022-11-29 | 江苏安超云软件有限公司 | Service request processing system and super-integration all-in-one machine |
CN111897822A (en) * | 2020-08-27 | 2020-11-06 | 平安银行股份有限公司 | Account state information processing method and device, electronic equipment and storage medium |
CN114827274B (en) * | 2022-04-15 | 2024-10-15 | 支付宝(杭州)信息技术有限公司 | Request processing method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10482102B2 (en) | Conditional master election in distributed databases | |
US9053167B1 (en) | Storage device selection for database partition replicas | |
US8788760B2 (en) | Adaptive caching of data | |
US10275489B1 (en) | Binary encoding-based optimizations at datastore accelerators | |
US11818209B2 (en) | State management and object storage in a distributed cloud computing network | |
US10482062B1 (en) | Independent evictions from datastore accelerator fleet nodes | |
CN112162846B (en) | Transaction processing method, device and computer readable storage medium | |
US10146814B1 (en) | Recommending provisioned throughput capacity for generating a secondary index for an online table | |
US10102230B1 (en) | Rate-limiting secondary index creation for an online table | |
US10747739B1 (en) | Implicit checkpoint for generating a secondary index of a table | |
US12032550B2 (en) | Multi-tenant partitioning in a time-series database | |
US10158709B1 (en) | Identifying data store requests for asynchronous processing | |
CN103312624A (en) | Message queue service system and method | |
CN112685499B (en) | Method, device and equipment for synchronizing flow data of working service flow | |
US11023291B2 (en) | Synchronization between processes in a coordination namespace | |
CN111338806A (en) | Service control method and device | |
WO2021017907A1 (en) | Method and device for optimized inter-microservice communication | |
US10146833B1 (en) | Write-back techniques at datastore accelerators | |
US9898614B1 (en) | Implicit prioritization to rate-limit secondary index creation for an online table | |
US11526516B2 (en) | Method, apparatus, device and storage medium for generating and processing a distributed graph database | |
CN108933813B (en) | Method, system and storage medium for preventing reader starvation | |
CN115883653B (en) | Request processing method, request processing device, electronic equipment and storage medium | |
CN114610740B (en) | Data version management method and device of medical data platform | |
CN115587119A (en) | Database query method and device, electronic equipment and storage medium | |
WO2021232860A1 (en) | Communication method, apparatus and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |