
CN117742923A - Distributed concurrency request control method, device, equipment and medium - Google Patents

Distributed concurrency request control method, device, equipment and medium

Info

Publication number
CN117742923A
CN117742923A (application CN202311806274.5A)
Authority
CN
China
Prior art keywords
concurrent
request
queue
new
concurrency
Prior art date
Legal status
Granted
Application number
CN202311806274.5A
Other languages
Chinese (zh)
Other versions
CN117742923B (en)
Inventor
钱威
Current Assignee
Shanghai Shuhe Information Technology Co Ltd
Original Assignee
Shanghai Shuhe Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Shuhe Information Technology Co Ltd filed Critical Shanghai Shuhe Information Technology Co Ltd
Priority to CN202311806274.5A priority Critical patent/CN117742923B/en
Publication of CN117742923A publication Critical patent/CN117742923A/en
Application granted granted Critical
Publication of CN117742923B publication Critical patent/CN117742923B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a distributed concurrency request control method, a device, equipment and a medium. The method comprises the following steps: an application step of submitting a new concurrency request; an acquisition step of acquiring a concurrent queue; a deleting step of deleting timed-out concurrent requests in the concurrent queue; a calculation step of calculating the request amount in the concurrent queue; and a processing step of judging whether to accept the new concurrent request according to the request amount. By deleting timed-out concurrent requests from the concurrent queue, the method solves the problem that concurrency is never released when an application goes down after acquiring it, and a renewal task ensures that the concurrency currently held by a live application is not deleted as timed out.

Description

Distributed concurrency request control method, device, equipment and medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a medium for controlling a distributed concurrency request.
Background
Existing rate-limiting methods generally include the token bucket, leaky bucket and sliding window. These methods limit the number of requests the system accepts per unit of time, so the longer the window, the more requests enter the system. When the response to a request is slow, i.e. the system's processing time is long, the number of requests being processed simultaneously keeps growing over time, which may eventually crash the system. Such limits are like a park entrance admitting a fixed number of people per second: if each guest stays in the park a long time, the park still overloads. Rate limiting of this kind is therefore prone to overload in some cases.
Disclosure of Invention
Based on this, in order to solve the above problems, a distributed concurrency request control method, apparatus, device and medium are presented herein. Deleting timed-out concurrent requests from the concurrent queue solves the problem that concurrency is never released when an application goes down after acquiring it, while a renewal task ensures that the concurrency currently held by a live application is not deleted as timed out.
According to a first aspect of the present invention, there is provided a distributed concurrency request control method, including the steps of:
an application step of submitting a new concurrency request;
an acquisition step of acquiring a concurrent queue;
a deleting step of deleting timed-out concurrent requests in the concurrent queue;
a calculation step of calculating the request amount in the concurrent queue;
and a processing step of judging whether to accept the new concurrent request according to the request amount.
In some embodiments, the method further comprises a renewal step: a renewal task in the application periodically updates the request time of the current concurrency request, so that the concurrency acquired by the application is not deleted as timed out.
In some embodiments, deleting timed-out concurrent requests in the concurrent queue includes: setting a timeout; obtaining each concurrent request's existence time by subtracting its request time from the current time; and deleting from the concurrent queue every concurrent request whose existence time exceeds the timeout.
In some embodiments, judging whether to accept the new concurrent request according to the request amount includes: comparing the request amount with a set threshold, accepting the new concurrent request if the request amount is smaller than the set threshold, and otherwise rejecting it.
In some embodiments, the concurrent queue stores the request time and the unique ID of each concurrent request, and insertion, deletion and query of concurrent requests in the concurrent queue are performed through the unique ID.
In some embodiments, the processing step comprises: classifying new concurrent requests by priority into three categories, high, medium and low; and dividing the concurrent queue into a concurrent queue A and a concurrent queue B;
when the new concurrent request is of high priority, judging whether the number of concurrent requests in concurrent queue A has reached the set upper limit of queue A, and if not, admitting the new request into queue A; otherwise, judging whether the number of concurrent requests in concurrent queue B has reached the set upper limit of queue B, and if not, admitting the new request into queue B; otherwise, the new request enters a deferred-processing queue, and is admitted into queue A or queue B once the number of concurrent requests there falls below the set upper limit;
when the new concurrent request is of medium priority, judging whether the number of concurrent requests in queue A has reached the set upper limit of queue A, and if not, admitting the new request into queue A; otherwise, rejecting the new request;
when the new concurrent request is of low priority, judging whether the number of concurrent requests in queue A has reached the set value of queue A, and if not, admitting the new request into queue A; otherwise, rejecting the new request;
wherein the set value of queue A is smaller than the set upper limit of queue A.
According to a second aspect of the present invention, there is provided a distributed concurrency request control device comprising:
the application module is used for submitting a new concurrency request;
the acquisition module is used for acquiring the concurrent queue;
the deleting module is used for deleting timed-out concurrent requests in the concurrent queue;
the calculation module is used for calculating the request amount in the concurrent queue;
and the processing module is used for judging whether to accept the new concurrent request according to the request amount.
In some embodiments, the device further includes a renewal module for updating the current concurrency request time through a renewal task in the application, so that the concurrency acquired by the application is not deleted as timed out.
According to a third aspect of the present invention, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterised in that the processor implements the steps of any of the methods of the embodiments described above when executing the computer program.
According to a fourth aspect of the present invention there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of any of the methods of the embodiments described above.
By implementing the scheme of the invention, the following beneficial effects can be obtained:
1. When the application goes down abnormally after acquiring concurrency, the stale concurrency is cleaned up automatically, solving the problem that concurrency acquired but not released before the downtime is never released. After acquiring concurrency, the application updates the request time periodically, ensuring that a live request is not cleaned up by other threads as timed out.
2. The concurrent request time and a globally unique ID are stored in the concurrent queue, so that in a downtime scenario the application service can still delete the downed instance's concurrency data using the timeout parameter and the request-time data in the queue. The globally unique ID provides fast insertion, deletion and query, meeting the performance requirements of a concurrency control system.
3. Concurrent requests are processed hierarchically, so that important tasks are processed first under high load and the stability of the system is ensured.
Drawings
FIG. 1 is a flow chart of some embodiments of a distributed concurrency request control method of the present invention;
FIG. 2 is a timing diagram of some embodiments of a distributed concurrency request control method of the present invention;
FIG. 3 is a schematic diagram of a concurrent queue data structure design according to some embodiments of the invention;
FIG. 4 is a schematic diagram of some embodiments of a distributed concurrency request control device of the present invention;
FIG. 5 is an internal block diagram of a computer device for implementing some embodiments of the invention.
Detailed Description
Embodiments of the present invention will now be described more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. The terms used herein should be construed to have meanings consistent with their meanings in the context of the specification and relevant art and are not to be construed in an idealized or overly formal sense unless expressly so defined herein.
FIG. 1 illustrates a flow chart of some embodiments of a distributed concurrency request control method of the present invention.
As shown in fig. 1, the method includes:
an application step S102, submitting a new concurrency request;
an acquisition step S104, acquiring a concurrent queue;
in some embodiments, the concurrency queue is obtained from the concurrency keys. The concurrency key is a corresponding parameter which is transmitted according to the requirement of limiting concurrency, for example: the concurrency is limited to a certain interface, and the unique ID of the interface is transmitted in; concurrency is limited to all interfaces of an application, then the unique ID of the application is entered.
a deleting step S106, deleting timed-out concurrent requests in the concurrent queue;
In some embodiments, deleting timed-out concurrent requests in the concurrent queue includes: setting a timeout; obtaining each concurrent request's existence time by subtracting its request time from the current time; and deleting from the concurrent queue every concurrent request whose existence time exceeds the timeout.
a calculation step S108, calculating the request amount in the concurrent queue;
and a processing step S110, judging whether to accept the new concurrent request according to the request amount.
In some embodiments, judging whether to accept the new concurrent request according to the request amount includes: comparing the request amount with a set threshold, accepting the new concurrent request if the request amount is smaller than the set threshold, and otherwise rejecting it.
In addition, to ensure that concurrency acquired by the application is not deleted as timed out, in some embodiments the method further includes a renewal step: a renewal task is added to the application's background scheduled tasks and updates the request time in the concurrency control system at fixed intervals, ensuring that a live request never times out and is never mistaken for a zombie application whose concurrency should be released.
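The renewal step can be sketched with a background thread refreshing the request time; this is an in-memory stand-in for the patent's concurrency control system, and the interval, function names and dict-based queue are assumptions:

```python
import threading
import time

def start_renewal_task(queue: dict, rid: str, interval_s: float) -> threading.Event:
    """Sketch of the renewal task: periodically refresh one request's time
    so that other threads' timeout cleanup never removes a request that is
    still being processed. Returns an Event; setting it stops the task,
    which the application does just before releasing the concurrency."""
    stop = threading.Event()

    def renew():
        # wait() doubles as the interval timer and the stop signal
        while not stop.wait(interval_s):
            if rid in queue:
                queue[rid] = time.time()  # update the request time

    threading.Thread(target=renew, daemon=True).start()
    return stop
```

The design point is ordering: delete the renewal task first, then release the concurrency, so a renewal can never resurrect an entry that was just released.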
In some embodiments, the concurrent queue stores the request time and the unique ID of each concurrent request, and insertion, deletion and query of concurrent requests are performed through the unique ID. The unique ID may be a UUID that uniquely tags the request.
In addition, the method addresses the situation where the number of concurrent requests exceeds the system's normal processing capacity and affects its operation. Hierarchical processing is designed for new concurrent requests: a system-load threshold is set according to the actual application scenario, and hierarchical processing is enabled when the load of the system handling concurrent requests exceeds that threshold.
The hierarchical processing specifically comprises: classifying new concurrent requests by priority into three categories, high, medium and low; and dividing the concurrent queue into a concurrent queue A and a concurrent queue B. When the new concurrent request is of high priority, judge whether the number of concurrent requests in queue A has reached queue A's set upper limit, and if not, admit the new request into queue A; otherwise, judge whether the number of concurrent requests in queue B has reached queue B's set upper limit, and if not, admit the new request into queue B; otherwise, the new request enters a deferred-processing queue and is admitted into queue A or queue B once the number of concurrent requests there falls below the set upper limit. When the new concurrent request is of medium priority, judge whether the number of concurrent requests in queue A has reached queue A's set upper limit, and if not, admit the new request into queue A; otherwise, reject it. When the new concurrent request is of low priority, judge whether the number of concurrent requests in queue A has reached queue A's set value, and if not, admit the new request into queue A; otherwise, reject it. The set value of queue A is smaller than the set upper limit of queue A.
Priority classification can be performed according to the importance of concurrent requests: the more important the request, the higher its priority. In addition, the upper limits of concurrent queue A and concurrent queue B can each be adjusted to the actual application scenario.
Through this hierarchical processing, high-priority concurrent requests are processed first and are not rejected when the system load is high; different concurrent requests are managed hierarchically, improving the stability of system operation.
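The admission rules above can be sketched as a single decision function; the return labels, list-based queues and parameter names are illustrative assumptions:

```python
def admit(priority: str, queue_a: list, queue_b: list, deferred: list,
          a_limit: int, b_limit: int, a_low_cap: int) -> str:
    """Sketch of the hierarchical processing. a_low_cap is the 'set value'
    of queue A reserved for low-priority requests and must be smaller than
    a_limit. Returns where the new request went: 'A', 'B', 'deferred',
    or 'rejected'."""
    if priority == "high":
        if len(queue_a) < a_limit:
            queue_a.append(priority); return "A"
        if len(queue_b) < b_limit:
            queue_b.append(priority); return "B"
        deferred.append(priority); return "deferred"  # retried when A or B drains
    if priority == "medium":
        if len(queue_a) < a_limit:
            queue_a.append(priority); return "A"
        return "rejected"
    # Low priority: only admitted below the stricter set value of queue A,
    # which keeps headroom in A for medium- and high-priority requests.
    if len(queue_a) < a_low_cap:
        queue_a.append(priority); return "A"
    return "rejected"
```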
Fig. 2 illustrates a timing diagram of some embodiments of a distributed concurrency request control method of the present invention.
As shown in fig. 2, a user submits a request to the application service, which applies to the concurrency control system for a unit of concurrency; the concurrency control system replies that the concurrency application failed or succeeded. On failure, the application service rejects the user's request. On success, the application service processes the user request while periodically renewing the concurrent request's time; when the user request finishes, the renewal task is deleted, one unit of concurrency is released, and success is returned to the user. Renewal is performed by the application service's renewal task, which passes two parameters, the concurrency key generated by the application service and the unique ID tagging the request, to the concurrency control system to update the request time. Releasing concurrency passes the same two parameters: after the application service finishes the business request, the concurrency control system locates the concurrent queue through the concurrency key and deletes the request's data from the queue by its globally unique ID.
FIG. 3 is a schematic diagram of a concurrent queue data structure design according to some embodiments of the invention. As shown in fig. 3, the queue is an ordered queue sorted by time, which makes deleting timed-out requests and releasing concurrency convenient. Such a data structure can be implemented with the Redis ZSET: the request time is stored as the ZSET score, and the request's globally unique ID is stored as the ZSET member.
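The ZSET mapping can be sketched as follows. So that the sketch runs without a server, a tiny in-memory stand-in implements only the four sorted-set calls used (with redis-py, a real client exposes the same `zadd`/`zremrangebyscore`/`zcard`/`zrem` methods); all other names are assumptions:

```python
import time
import uuid

class MiniZSet:
    """Minimal in-memory stand-in for the Redis ZSET commands used below."""
    def __init__(self):
        self.data = {}  # key -> {member: score}

    def zadd(self, key, mapping):
        self.data.setdefault(key, {}).update(mapping)

    def zremrangebyscore(self, key, lo, hi):
        zs = self.data.get(key, {})
        victims = [m for m, s in zs.items() if lo <= s <= hi]
        for m in victims:
            del zs[m]
        return len(victims)

    def zcard(self, key):
        return len(self.data.get(key, {}))

    def zrem(self, key, member):
        return 1 if self.data.get(key, {}).pop(member, None) is not None else 0

def acquire(client, key, timeout_s, threshold, now=None):
    """Request time is the ZSET score; the globally unique ID is the member."""
    now = time.time() if now is None else now
    # Delete entries whose existence time exceeds the timeout,
    # i.e. whose score (request time) is at or before now - timeout_s.
    client.zremrangebyscore(key, 0, now - timeout_s)
    if client.zcard(key) >= threshold:
        return None
    rid = str(uuid.uuid4())
    client.zadd(key, {rid: now})
    return rid

def release(client, key, rid):
    client.zrem(key, rid)  # delete the request data by its unique ID
```

Note that a production implementation would wrap the prune/count/insert sequence in a Lua script or transaction for atomicity across concurrent callers; that concern is outside this sketch.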
According to the invention, through the design of the distributed concurrency request control method and the storage structure of the concurrency control system's queue, concurrency that would otherwise never be released because of downtime is cleaned up automatically: timed-out concurrency data is actively removed. After acquiring concurrency, the application updates the request time periodically so that the live request is not cleaned up by other threads as timed out, and it deletes the periodic renewal task before releasing the concurrency. Because the concurrent request time and the globally unique ID are stored in the queue and data with (current time - request time) > timeout is deleted, the application service can still delete a downed instance's concurrency data in a downtime scenario using the timeout parameter and the request-time data in the queue.
FIG. 4 illustrates a schematic diagram of some embodiments of a distributed concurrency request control device of the present invention; as shown in fig. 4, the distributed concurrency request control device in the embodiment includes:
an application module 100, configured to submit a new concurrency request;
an obtaining module 200, configured to obtain a concurrency queue;
a deleting module 300, configured to delete a timeout concurrency request in the concurrency queue;
a calculation module 400, configured to calculate a request amount in the concurrent queue;
and the processing module 500 is configured to determine whether to accept the new concurrent request according to the request amount.
For specific limitations of the distributed concurrency request control device, reference may be made to the limitations of the distributed concurrency request control method above, which are not repeated here. Each module of the above device may be implemented in whole or in part by software, hardware, or combinations thereof. The modules may be embedded in hardware in, or independent of, a processor in the computer device, or stored as software in a memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
The invention also provides a computer device, which may be a terminal whose internal structure is shown in fig. 5. The computer device includes a processor, a memory, a network interface, a display screen and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program; the internal memory provides an environment for running them. The network interface of the computer device communicates with external terminals through a network connection. The computer program, when executed by the processor, implements the distributed concurrency request control method described above. The display screen may be a liquid crystal display or an electronic ink display; the input device may be a touch layer covering the display screen, keys, a trackball or a touchpad on the housing of the computer device, or an external keyboard, touchpad or mouse. Those skilled in the art will appreciate that the structure shown in fig. 5 is merely a block diagram of some of the structures associated with the present arrangements and does not limit the computer devices to which they may be applied; a particular computer device may include more or fewer components than shown, combine some components, or arrange components differently.
The invention also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the distributed concurrency request control method described above.
Those skilled in the art will appreciate that implementing all or part of the above-described method embodiments may be accomplished by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may comprise the steps of the method embodiments described above. Any reference to memory, storage, a database or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM), among others.
Thus, embodiments of the present invention have been described in detail. In order to avoid obscuring the concepts of the invention, some details known in the art have not been described. How to implement the solutions disclosed herein will be fully apparent to those skilled in the art from the above description.
While certain specific embodiments of the invention have been described in detail by way of example, it will be appreciated by those skilled in the art that the above examples are for illustration only and are not intended to limit the scope of the invention. It will be understood by those skilled in the art that the foregoing embodiments may be modified and equivalents substituted for elements thereof without departing from the scope and spirit of the invention. The scope of the invention is defined by the appended claims.

Claims (10)

1. A distributed concurrency request control method, comprising the steps of:
an application step of submitting a new concurrency request;
an acquisition step of acquiring a concurrent queue;
a deleting step of deleting timed-out concurrent requests in the concurrent queue;
a calculation step of calculating the request amount in the concurrent queue;
and a processing step of judging whether to accept the new concurrent request according to the request amount.
2. The method of claim 1, wherein,
the method further comprises a renewal step: a renewal task in the application periodically updates the request time of the current concurrency request, so that the concurrency acquired by the application is not deleted as timed out.
3. The method of claim 1, wherein,
deleting timed-out concurrent requests in the concurrent queue comprises: setting a timeout; obtaining each concurrent request's existence time by subtracting its request time from the current time; and deleting from the concurrent queue every concurrent request whose existence time exceeds the timeout.
4. The method of claim 1, wherein,
judging whether to accept the new concurrent request according to the request amount comprises: comparing the request amount with a set threshold, accepting the new concurrent request if the request amount is smaller than the set threshold, and otherwise rejecting it.
5. The method of claim 1, wherein,
the concurrent queue stores the request time and the unique ID of each concurrent request, and insertion, deletion and query of concurrent requests in the concurrent queue are performed through the unique ID.
6. The method of claim 1, wherein,
the processing step comprises: classifying new concurrent requests by priority into three categories, high, medium and low; and dividing the concurrent queue into a concurrent queue A and a concurrent queue B;
when the new concurrent request is of high priority, judging whether the number of concurrent requests in concurrent queue A has reached the set upper limit of queue A, and if not, admitting the new request into queue A; otherwise, judging whether the number of concurrent requests in concurrent queue B has reached the set upper limit of queue B, and if not, admitting the new request into queue B; otherwise, the new request enters a deferred-processing queue, and is admitted into queue A or queue B once the number of concurrent requests there falls below the set upper limit;
when the new concurrent request is of medium priority, judging whether the number of concurrent requests in queue A has reached the set upper limit of queue A, and if not, admitting the new request into queue A; otherwise, rejecting the new request;
when the new concurrent request is of low priority, judging whether the number of concurrent requests in queue A has reached the set value of queue A, and if not, admitting the new request into queue A; otherwise, rejecting the new request;
wherein the set value of queue A is smaller than the set upper limit of queue A.
7. A distributed concurrency request control device, comprising:
the application module is used for submitting a new concurrency request;
the acquisition module is used for acquiring the concurrent queue;
the deleting module is used for deleting timed-out concurrent requests in the concurrent queue;
the calculation module is used for calculating the request amount in the concurrent queue;
and the processing module is used for judging whether to accept the new concurrent request according to the request amount.
8. The distributed concurrency request control device of claim 7,
the method also comprises a renewal module, which is used for updating the current concurrency request time through renewal tasks in the application so that the current concurrency acquired by the application is not deleted due to timeout.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
CN202311806274.5A 2023-12-26 2023-12-26 Distributed concurrency request control method, device, equipment and medium Active CN117742923B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311806274.5A CN117742923B (en) 2023-12-26 2023-12-26 Distributed concurrency request control method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN117742923A true CN117742923A (en) 2024-03-22
CN117742923B CN117742923B (en) 2024-09-13

Family

ID=90282883

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311806274.5A Active CN117742923B (en) 2023-12-26 2023-12-26 Distributed concurrency request control method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN117742923B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106302809A (en) * 2016-09-20 2017-01-04 天津海量信息技术股份有限公司 Server performance optimization method and system
CN110401697A (en) * 2019-06-26 2019-11-01 苏州浪潮智能科技有限公司 Method, system and device for concurrently processing HTTP requests
CN110858158A (en) * 2018-08-23 2020-03-03 北京京东金融科技控股有限公司 Distributed task scheduling method and device, electronic equipment and storage medium
CN111061556A (en) * 2019-12-26 2020-04-24 深圳前海环融联易信息科技服务有限公司 Optimization method and device for executing priority task, computer equipment and medium
CN111367693A (en) * 2020-03-13 2020-07-03 苏州浪潮智能科技有限公司 Method, system, device and medium for scheduling plug-in tasks based on message queue
CN113342498A (en) * 2021-06-28 2021-09-03 平安信托有限责任公司 Concurrent request processing method, device, server and storage medium
CN115292012A (en) * 2022-07-25 2022-11-04 平安银行股份有限公司 Thread pool management method and system, intelligent terminal and storage medium
CN115357363A (en) * 2022-08-30 2022-11-18 阿里巴巴(中国)有限公司 Current limiting method and device, task response system, electronic equipment and storage medium
CN115421889A (en) * 2022-09-05 2022-12-02 迷你创想科技(深圳)有限公司 Inter-process request management method and device, electronic equipment and storage medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant