
CN104618493A - Data request processing method and device - Google Patents

Data request processing method and device

Info

Publication number
CN104618493A
CN104618493A
Authority
CN
China
Prior art keywords
request
thread
server
preset duration
new
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510077187.XA
Other languages
Chinese (zh)
Inventor
沈建荣 (Shen Jianrong)
谭国斌 (Tan Guobin)
马哲 (Ma Zhe)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc filed Critical Xiaomi Inc
Priority to CN201510077187.XA priority Critical patent/CN104618493A/en
Publication of CN104618493A publication Critical patent/CN104618493A/en
Pending legal-status Critical Current

Landscapes

  • Computer And Data Communications (AREA)

Abstract

The disclosure provides a data request processing method and device that allow a server which has reached its maximum capacity for processing data requests to reject new requests while still handling them through other effective means. The method includes: dispatching received data-acquisition requests to threads and starting those threads to process the requests; and, when the number of working threads equals a first number, handling any newly received request within a preset duration after its receipt by using a thread that has returned to the idle state or by using a second server. Because new requests are rejected when the number of working threads reaches the maximum, while non-timed-out new requests are still handled by idle threads or by other servers, the server can continue to process newly received requests even after reaching its maximum processing capacity, which improves the efficiency with which the server processes requests.

Description

Data request processing method and device
Technical field
The present disclosure relates to the field of Internet technology, and in particular to a data request processing method and device.
Background technology
Web services, as a new platform for building interoperable distributed applications, are very sensitive to latency. When the front end keeps sending requests to a server, the server stays in a healthy operating state as long as the peak request volume does not exceed the maximum processing capability of its back-end data processing. Once the peak front-end request volume exceeds that capability and the server takes no protective measures, the backlog of accumulated requests keeps growing, the accumulated timed-out requests reach a certain scale, and a vicious circle forms. Any processing the server performs on these timed-out requests is wasted because the results arrive too late, the service capability presented externally drops to zero, and the situation cannot recover on its own. The server can only be relieved by restarting it, but after the restart it is immediately swamped again by the flood of front-end requests, overloads once more, and falls into an "overload, then restart" vicious circle.
To deal with the problem described above, the current approach is to empty the request queue. Specifically, all received front-end requests are saved in a queue; when the queue has piled up to a certain extent and all requests in it have timed out, the queue is emptied. This can be implemented with a monitoring mechanism, for example by simulating a client that sends a few requests to the server at regular intervals: if most of the requests return normally, the back-end processing system is working; if most of them time out, the back-end processing system has stalled and the request queue must be emptied to temporarily cope with the request peak.
However, this approach treats the symptoms rather than the cause. Because the number of requests the server back end can process is fixed, emptying the request queue during a request peak does not stop the flood of incoming front-end requests; the heavy blocking they cause still brings the server down, and the "overload, then restart" vicious circle starts again.
Summary of the invention
To overcome the problems in the related art, embodiments of the disclosure provide a data request processing method and device so that, when the server's capacity for processing data requests has reached its maximum, new requests can be rejected and then handled through other effective means.
According to a first aspect of the embodiments of the disclosure, a data request processing method is provided, comprising:
dispatching received requests for acquiring data to threads, and starting the threads to process the requests;
when the number of working threads equals a first number, if a new request is received, processing the new request within a preset duration after its receipt by using an idle thread or a second server; wherein the first number is the maximum number of threads preset for the first server, and an idle thread is a thread among the working threads that has changed from the working state to the idle state.
The technical solutions provided by the embodiments of the disclosure may have the following beneficial effects:
When the number of working threads reaches the maximum, the scheme rejects new requests and uses idle threads or other servers to process those new requests that have not timed out, so the server can still process newly received requests after reaching its maximum processing capability, which improves the efficiency with which the server processes requests.
In one embodiment, before the number of working threads equals the first number, the method further comprises:
when the number of working threads equals a second number, continuing to receive requests, the second number being smaller than the first number;
adding the requests that continue to be received into a bounded queue and, while adding, dispatching requests from the bounded queue to any idle thread that appears;
when the number of requests in the bounded queue equals a predetermined number, dispatching requests from the bounded queue to preset threads until the number of working threads equals the first number, the preset threads including idle threads that appear and threads that have not yet been started.
In this embodiment, received requests are dispatched to threads in a stepped manner, which improves the efficiency with which the server processes requests.
In one embodiment, processing the new request within the preset duration after its receipt by using an idle thread or a second server when the number of working threads equals the first number comprises:
when the number of working threads equals the first number and the number of requests in the bounded queue equals the predetermined number, if a new request is received, processing the new request within the preset duration after its receipt by using an idle thread or a second server.
In this embodiment, when the number of working threads has reached the maximum and the number of requests in the bounded queue has also reached the preset maximum, the server refuses new requests directly and instead uses idle threads or other servers to handle new requests that have not timed out, so the server can still process newly received requests after reaching its maximum processing capability, which improves request-processing efficiency.
In one embodiment, processing the new request within the preset duration after its receipt by using an idle thread comprises:
recording the new request within the preset duration after its receipt, and monitoring whether an idle thread appears;
when an idle thread appears within the preset duration, dispatching the recorded new request to that idle thread for processing;
when no idle thread appears within the preset duration, deleting the recorded new request.
In this embodiment, idle threads handle requests that have not exceeded the preset duration, and requests that have exceeded it are deleted, so the server can still process newly received requests after reaching its maximum processing capability; this not only improves request-processing efficiency but also avoids returning long-timed-out responses to the user.
In one embodiment, processing the new request within the preset duration after its receipt by using an idle thread comprises:
recording the new request within the preset duration after its receipt, and monitoring whether the number of requests in the bounded queue is smaller than the predetermined number;
when the number of requests in the bounded queue is found to be smaller than the predetermined number within the preset duration, adding the recorded new request into the bounded queue and, while adding, dispatching requests from the bounded queue to any idle thread that appears;
when the number of requests in the bounded queue remains equal to the predetermined number throughout the preset duration, deleting the recorded new request.
This embodiment uses the bounded queue and idle threads to process new requests, so the server can process requests at maximum efficiency while avoiding returning long-timed-out responses to the user.
In one embodiment, processing the new request within the preset duration after its receipt by using a second server comprises:
within the preset duration after receiving the new request, forwarding the new request to the second server for processing.
In this embodiment, other servers are used to process newly received requests, so the server can still have new requests handled after reaching its maximum processing capability, which improves request-processing efficiency.
In one embodiment, the method further comprises:
when the number of times each of all preset second servers has refused the new request exceeds a preset count, sending a notification that the number of second servers is too small.
In one embodiment, after dispatching the requests for acquiring data to threads and starting the threads to process the requests, the method further comprises:
when the number of working threads equals the first number, if a new request is received, using an offline download server to process the new request offline within the preset duration after its receipt.
In this embodiment, offline processing is adopted when there are too many requests, so newly received requests can still be handled by other means after the server reaches its maximum processing capability, which improves request-processing efficiency.
According to a second aspect of the embodiments of the disclosure, a data request processing device for a first server is provided, comprising:
an opening module, configured to dispatch received requests for acquiring data to threads and start the threads to process the requests;
a first processing module, configured to, when the number of working threads equals a first number and a new request is received, process the new request within a preset duration after its receipt by using an idle thread or a second server; wherein the first number is the maximum number of threads preset for the first server, and an idle thread is a thread among the working threads that has changed from the working state to the idle state.
In one embodiment, the device further comprises:
a receiving module, configured to continue receiving requests when the number of working threads equals a second number, the second number being smaller than the first number;
an adding module, configured to add the requests that continue to be received into a bounded queue and, while adding, dispatch requests from the bounded queue to any idle thread that appears;
a distribution module, configured to, when the number of requests in the bounded queue equals a predetermined number, dispatch requests from the bounded queue to preset threads until the number of working threads equals the first number, the preset threads including idle threads that appear and threads that have not yet been started.
In one embodiment, the first processing module comprises:
a processing submodule, configured to, when the number of working threads equals the first number and the number of requests in the bounded queue equals the predetermined number, process a newly received request within the preset duration after its receipt by using an idle thread or a second server.
In one embodiment, the first processing module comprises:
a first recording submodule, configured to record the new request within the preset duration after its receipt and monitor whether an idle thread appears;
a distribution submodule, configured to dispatch the recorded new request to an idle thread that appears within the preset duration;
a first deletion submodule, configured to delete the recorded new request when no idle thread appears within the preset duration.
In one embodiment, the first processing module comprises:
a second recording submodule, configured to record the new request within the preset duration after its receipt and monitor whether the number of requests in the bounded queue is smaller than the predetermined number;
an adding submodule, configured to, when the number of requests in the bounded queue is found to be smaller than the predetermined number within the preset duration, add the recorded new request into the bounded queue and, while adding, dispatch requests from the bounded queue to any idle thread that appears;
a second deletion submodule, configured to delete the recorded new request when the number of requests in the bounded queue remains equal to the predetermined number throughout the preset duration.
In one embodiment, the first processing module comprises:
a forwarding submodule, configured to forward the new request to the second server for processing within the preset duration after its receipt.
In one embodiment, the device further comprises:
a reminding module, configured to send a notification that the number of second servers is too small when the number of times each of all preset second servers has refused the new request exceeds a preset count.
In one embodiment, the device further comprises:
a second processing module, configured to, when the number of working threads equals the first number and a new request is received, use an offline download server to process the new request offline within the preset duration after its receipt.
According to a third aspect of the embodiments of the disclosure, a data request processing device is provided, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
dispatch received requests for acquiring data to threads, and start the threads to process the requests; and
when the number of working threads equals a first number, if a new request is received, process the new request within a preset duration after its receipt by using an idle thread or a second server; wherein the first number is the maximum number of threads preset for the first server, and an idle thread is a thread among the working threads that has changed from the working state to the idle state.
With the above technical solution, when the number of working threads reaches the maximum, the server rejects new requests and uses idle threads or other servers to process those new requests that have not timed out, so it can still process newly received requests after reaching its maximum processing capability, which improves request-processing efficiency.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory and do not limit the disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a data request processing method according to an exemplary embodiment.
Fig. 2 is a flowchart of another data request processing method according to an exemplary embodiment.
Fig. 3 is a flowchart of a data request processing method according to exemplary embodiment one.
Fig. 4 is a flowchart of a data request processing method according to exemplary embodiment two.
Fig. 5 is a block diagram of a data request processing device according to an exemplary embodiment.
Fig. 6 is a block diagram of another data request processing device according to an exemplary embodiment.
Fig. 7 is a block diagram of the first processing module in a data request processing device according to an exemplary embodiment.
Fig. 8 is a block diagram of the first processing module in a data request processing device according to an exemplary embodiment.
Fig. 9 is a block diagram of the first processing module in a data request processing device according to an exemplary embodiment.
Fig. 10 is a block diagram of the first processing module in a data request processing device according to an exemplary embodiment.
Fig. 11 is a block diagram of another data request processing device according to an exemplary embodiment.
Fig. 12 is a block diagram of a device applicable to data request processing according to an exemplary embodiment.
Detailed description of the embodiments
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless indicated otherwise. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure as recited in the appended claims.
An embodiment of the disclosure provides a data request processing method that can be used in a first server. As shown in Fig. 1, the method comprises steps S101-S102:
In step S101, requests for acquiring data are dispatched to threads, and the threads are started to process the requests.
In step S102, when the number of working threads equals a first number, if a new request is received, the new request is processed within a preset duration after its receipt by using an idle thread or a second server.
Here, the first number is the maximum number of threads preset for the first server, and an idle thread is a thread among the working threads that has changed from the working state to the idle state.
The number of threads of a server is set according to the server's performance. For example, if the thread pool of the first server is configured with 60 threads in total, the first number in this embodiment is 60. The preset duration is set so that the server does not keep processing requests that, from the user's point of view, have already been waited on for too long: if a received request has still not been processed within the preset duration, the server discards it.
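As a concrete illustration of the timeout rule above, the sketch below stamps each request on arrival and reports it as expired once it has waited longer than the preset duration. The examples in this document use Java because the cited non-patent literature concerns Java thread pools; the class and field names, and the way the arrival time is recorded, are assumptions for illustration rather than the patented implementation.

import java.time.Duration;
import java.time.Instant;

// Minimal sketch of the preset-duration check: a request carries its arrival time,
// and the server discards it once it has waited longer than the preset duration.
class TimedRequest {
    final Instant receivedAt = Instant.now();   // stamped when the request is received
    final Runnable work;
    TimedRequest(Runnable work) { this.work = work; }
    // True once the request has waited longer than the preset duration and should be discarded.
    boolean expired(Duration presetDuration) {
        return Duration.between(receivedAt, Instant.now()).compareTo(presetDuration) > 0;
    }
}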
With the technical solution provided by this embodiment of the disclosure, when the number of working threads reaches the maximum, the server rejects new requests and uses idle threads or other servers to process those new requests that have not timed out, so it can still process newly received requests after reaching its maximum processing capability, which improves request-processing efficiency.
In one embodiment, as shown in Fig. 2, before step S102 is performed the method further comprises steps S201-S203:
In step S201, when the number of working threads equals a second number, requests continue to be received, the second number being smaller than the first number.
In step S202, the requests that continue to be received are added into a bounded queue; while they are being added, if an idle thread appears, a request from the bounded queue is dispatched to that idle thread for processing.
In step S203, when the number of requests in the bounded queue equals a predetermined number, requests from the bounded queue are dispatched to preset threads until the number of working threads equals the first number; the preset threads include idle threads that appear and threads that have not yet been started.
After step S203 has been performed, this embodiment may continue with step S102.
The requests in the bounded queue are ordered by the time at which the server received them, and the second number is set so that the server can process requests at maximum efficiency and return processed results to front-end users as quickly as possible. Once the second number of threads in the server's thread pool are working, each further request received is added to the bounded queue. When the number of requests in the bounded queue reaches the predetermined number, requests are dispatched, in their queue order, to the server's preset threads, the preset threads here being the server's threads other than the second number of working threads from step S201. At this point several threads may be dispatched at once to process requests from the bounded queue, or a single thread may be dispatched after each request arrives to process the request at the head of the bounded queue so that the newly received request can be added to the queue; this continues until the first number of threads in the server are working and the bounded queue is full, after which the server rejects new requests directly and handles newly arriving requests with the method of step S102.
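The stepped allocation just described (a second number of threads first, then a bounded queue of a predetermined size, then growth up to the first number) matches the growth behavior of Java's standard thread pool, so a minimal sketch can be written with java.util.concurrent.ThreadPoolExecutor using the example figures from this description (30, 20 and 60). Mapping the scheme onto ThreadPoolExecutor is an assumption for illustration, not the implementation claimed by the patent.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SteppedPool {
    static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                30,                            // "second number": threads used first
                60,                            // "first number": maximum working threads
                60L, TimeUnit.SECONDS,         // extra threads above the core are reclaimed when idle
                new ArrayBlockingQueue<>(20)); // bounded queue of the predetermined size
        // ThreadPoolExecutor grows in the same stepped way: it fills the 30 core threads,
        // then the 20-slot queue, then starts extra threads up to 60; beyond that a submission
        // is rejected and must be handled as in step S102 (idle thread, second server, or
        // offline processing).
    }
}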
It follows from the above embodiment that step S102 can also be implemented as: when the number of working threads equals the first number and the number of requests in the bounded queue equals the predetermined number, if a new request is received, processing the new request within the preset duration after its receipt by using an idle thread or a second server.
In one embodiment, step S102 may be implemented as the following steps A1-A3. In step A1, within the preset duration after receiving the new request, the new request is recorded and the server monitors whether an idle thread appears. In step A2, when an idle thread appears within the preset duration, the recorded new request is dispatched to that idle thread for processing. In step A3, when no idle thread appears within the preset duration, the recorded new request is deleted. The preset duration applies to each request individually; for example, with a preset duration of 10 seconds, if a given request has been recorded by the server for more than 10 seconds, the server deletes the record of that request.
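One way to express steps A1-A3 with the ThreadPoolExecutor sketch above is to poll for an idle worker during the preset duration and drop the request if none appears. Treating getActiveCount() falling below getPoolSize() as the appearance of an idle thread, and the polling interval, are assumptions for illustration.

import java.util.concurrent.ThreadPoolExecutor;

class WaitForIdleThread {
    // Returns true if an idle thread appeared within the preset duration and the request
    // was handed over (A1/A2); returns false if the request should be deleted (A3).
    static boolean tryHandOff(ThreadPoolExecutor pool, Runnable newRequest,
                              long presetDurationMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + presetDurationMillis;  // A1: record the arrival
        while (System.currentTimeMillis() < deadline) {
            if (pool.getActiveCount() < pool.getPoolSize()) {               // an idle worker exists
                pool.execute(newRequest);                                   // A2: resubmit; the idle worker picks it up
                return true;
            }
            Thread.sleep(50);                                               // poll until the deadline
        }
        return false;                                                       // A3: delete the recorded request
    }
}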
In one embodiment, after the server receives a new request it adds the new request to the bounded queue according to the number of requests already in the queue, so step S102 can also be implemented as the following steps B1-B3. In step B1, within the preset duration after receiving the new request, the new request is recorded and the server monitors whether the number of requests in the bounded queue is smaller than the predetermined number. In step B2, when the number of requests in the bounded queue is found to be smaller than the predetermined number within the preset duration, the recorded new request is added into the bounded queue; while it is being added, if an idle thread appears, a request from the bounded queue is dispatched to that idle thread for processing. In step B3, when the number of requests in the bounded queue remains equal to the predetermined number throughout the preset duration, the recorded new request is deleted.
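Steps B1-B3 amount to waiting a bounded time for a slot to open in the bounded queue. With the ThreadPoolExecutor sketch above, one way to express this (again an assumption for illustration, not the patented implementation) is a rejection handler that retries the queue with a timeout; the 10-second value mirrors the example preset duration given earlier.

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class WaitThenDropPolicy implements RejectedExecutionHandler {
    private static final long PRESET_DURATION_SECONDS = 10;
    @Override
    public void rejectedExecution(Runnable newRequest, ThreadPoolExecutor pool) {
        try {
            // B1/B2: the timed offer succeeds as soon as space appears in the bounded queue.
            boolean queued = pool.getQueue().offer(newRequest,
                    PRESET_DURATION_SECONDS, TimeUnit.SECONDS);
            if (!queued) {
                // B3: the queue stayed full for the whole preset duration, so the request is dropped.
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}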
In one embodiment, step S102 can also be implemented as: within the preset duration after receiving the new request, forwarding the new request to the second server for processing. The second server may be an offline server, or it may be a server identical to the first server of the above embodiments, i.e. the second server may itself process requests with the method of the above embodiments. In this embodiment, when the number of times each of all preset second servers has refused the new request exceeds a preset count, a notification is sent indicating that the number of second servers is too small.
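A minimal sketch of forwarding an overflow request to a second server over HTTP is shown below. The endpoint URL, the use of a plain POST with the request body, the 200-means-accepted convention and the 10-second timeout are all assumptions for illustration; the patent does not specify the forwarding protocol.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

class ForwardToSecondServer {
    private final HttpClient client = HttpClient.newHttpClient();
    // Returns true if the second server accepted the request, false if it refused.
    boolean forward(String secondServerUrl, String requestBody) throws Exception {
        HttpRequest forwarded = HttpRequest.newBuilder(URI.create(secondServerUrl))
                .timeout(Duration.ofSeconds(10))   // bounded by the preset duration
                .POST(HttpRequest.BodyPublishers.ofString(requestBody))
                .build();
        HttpResponse<String> response =
                client.send(forwarded, HttpResponse.BodyHandlers.ofString());
        return response.statusCode() == 200;       // any other status is treated as a refusal
    }
}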
In one embodiment, the method further comprises: when the number of working threads equals the first number, if a new request is received, using an offline download server to process the new request offline within the preset duration after its receipt.
The data request processing method provided by the disclosure is described below through specific embodiments.
Embodiment one
In this embodiment, the data request processing method is used in a server. When the number of working threads in the server equals the first number, the server rejects new requests and adds a new request into the bounded queue only once a vacancy appears there. As shown in Fig. 3, the method comprises the following steps S301-S307:
In step S301, requests for acquiring data are dispatched to threads, and the threads are started to process the requests.
In step S302, when the number of working threads equals the second number, requests that continue to be received are added into the bounded queue, the second number being smaller than the first number. For example, the second number is set to 30 and the first number is set to 60 according to the server's performance. In this step, if one of the server's 30 threads finishes processing its request, a request from the bounded queue can be dispatched to that thread.
In step S303, when the number of requests in the bounded queue equals the predetermined number, requests from the bounded queue are dispatched to preset threads until the number of working threads equals the first number; the preset threads include idle threads that appear and threads that have not yet been started. For example, with a predetermined number of 20 and a first number of 60, when the number of requests in the bounded queue equals 20, requests are dispatched in their queue order to preset threads until all 60 of the server's threads are working.
In step S304, within the preset duration after a new request is received, the new request is recorded.
In step S305, the server monitors, within the preset duration, whether the number of requests in the bounded queue is smaller than the predetermined number. If it is found to be smaller within the preset duration, step S306 is performed; if it remains equal to the predetermined number throughout the preset duration, step S307 is performed. For example, when an idle thread appears, a request from the bounded queue can be assigned to it for processing, which leaves a vacancy in the bounded queue, i.e. the number of requests in the queue becomes smaller than the predetermined number.
In step S306, the recorded new request is added into the bounded queue. When this step is performed, the recorded new requests are likewise ordered by the time at which they were received, and if an idle thread appears, the requests queued in the bounded queue are dispatched to it for processing.
In step S307, the recorded new request is deleted.
With the technical solution provided by this embodiment, requests are processed through a stepped allocation so the server can work at maximum efficiency, and when the number of working threads in the server reaches the maximum, new requests are rejected and idle threads process those new requests that have not timed out; the server can therefore still process newly received requests after reaching its maximum processing capability, which improves request-processing efficiency.
Embodiment two
In this embodiment, the data request processing method is used in a first server. When the number of working threads in the first server equals the first number, the server rejects new requests and dispatches them to a second server for processing; the second server may be an offline server or a server identical to the first server. As shown in Fig. 4, the method comprises the following steps S401-S406:
In step S401, requests for acquiring data are dispatched to threads, and the threads are started to process the requests.
In step S402, when the number of working threads equals the second number, requests that continue to be received are added into the bounded queue, the second number being smaller than the first number.
In step S403, when the number of requests in the bounded queue equals the predetermined number, requests from the bounded queue are dispatched to preset threads until the number of working threads equals the first number; the preset threads include idle threads that appear and threads that have not yet been started.
In step S404, within the preset duration after a new request is received, the new request is forwarded to the second server for processing.
In step S405, a message returned by the second server refusing to process the request is received. For example, if the second server is a server identical to the first server, then when all threads in the second server's thread pool are working and the number of requests queued in its bounded queue has reached the quantity preset by the second server, it returns a refusal message to the first server.
In step S406, when the number of times each of all preset second servers has refused the new request exceeds a preset count, a notification is sent indicating that the number of second servers is too small. For example, if the system contains five second servers among which the first server distributes new requests, and each of these five servers has refused to process a given request the preset number of times, the notification indicates that the system has too few servers.
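The check in step S406 can be sketched as a per-server refusal counter that raises the notification once every second server has exceeded the preset count. The class shape and the placeholder notification method are assumptions for illustration.

import java.util.Collection;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

class RefusalMonitor {
    private final int presetCount;
    private final ConcurrentHashMap<String, AtomicInteger> refusals = new ConcurrentHashMap<>();
    RefusalMonitor(Collection<String> secondServers, int presetCount) {
        this.presetCount = presetCount;
        secondServers.forEach(s -> refusals.put(s, new AtomicInteger()));
    }
    void recordRefusal(String secondServer) {
        refusals.get(secondServer).incrementAndGet();
        boolean allExceeded = refusals.values().stream()
                .allMatch(c -> c.get() > presetCount);
        if (allExceeded) {
            notifyTooFewServers();   // placeholder for sending the "too few second servers" message
        }
    }
    private void notifyTooFewServers() { /* e.g. alert an operator or a monitoring system */ }
}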
With the technical solution provided by this embodiment, requests are processed through a stepped allocation so the server can work at maximum efficiency, and when the number of working threads in the server reaches the maximum, new requests are rejected and the second server processes those new requests that have not timed out; the first server can therefore still have newly received requests processed after reaching its maximum processing capability, which improves request-processing efficiency.
Corresponding to the data request processing method provided by the above embodiments, an embodiment of the disclosure further provides a data request processing device. As shown in Fig. 5, the device comprises an opening module 51 and a first processing module 52:
The opening module 51 is configured to dispatch received requests for acquiring data to threads and start the threads to process the requests.
The first processing module 52 is configured to, when the number of working threads equals a first number and a new request is received, process the new request within a preset duration after its receipt by using an idle thread or a second server; the first number is the maximum number of threads preset for the first server, and an idle thread is a thread among the working threads that has changed from the working state to the idle state.
In one embodiment, as shown in Fig. 6, the device further comprises a receiving module 53, an adding module 54 and a distribution module 55:
The receiving module 53 is configured to continue receiving requests when the number of working threads equals a second number, the second number being smaller than the first number.
The adding module 54 is configured to add the requests that continue to be received into a bounded queue and, while adding, dispatch requests from the bounded queue to any idle thread that appears.
The distribution module 55 is configured to, when the number of requests in the bounded queue equals a predetermined number, dispatch requests from the bounded queue to preset threads until the number of working threads equals the first number, the preset threads including idle threads that appear and threads that have not yet been started.
In one embodiment, as shown in Fig. 7, the first processing module 52 comprises a processing submodule 521, which is configured to, when the number of working threads equals the first number and the number of requests in the bounded queue equals the predetermined number, process a newly received request within the preset duration after its receipt by using an idle thread or a second server.
In one embodiment, as shown in Fig. 8, the first processing module 52 comprises a first recording submodule 522, a distribution submodule 523 and a first deletion submodule 524:
The first recording submodule 522 is configured to record the new request within the preset duration after its receipt and monitor whether an idle thread appears.
The distribution submodule 523 is configured to dispatch the recorded new request to an idle thread that appears within the preset duration.
The first deletion submodule 524 is configured to delete the recorded new request when no idle thread appears within the preset duration.
In one embodiment, as shown in Fig. 9, the first processing module 52 comprises a second recording submodule 525, an adding submodule 526 and a second deletion submodule 527:
The second recording submodule 525 is configured to record the new request within the preset duration after its receipt and monitor whether the number of requests in the bounded queue is smaller than the predetermined number.
The adding submodule 526 is configured to, when the number of requests in the bounded queue is found to be smaller than the predetermined number within the preset duration, add the recorded new request into the bounded queue and, while adding, dispatch requests from the bounded queue to any idle thread that appears.
The second deletion submodule 527 is configured to delete the recorded new request when the number of requests in the bounded queue remains equal to the predetermined number throughout the preset duration.
In one embodiment, as shown in Fig. 10, the first processing module 52 comprises a forwarding submodule 528, which is configured to forward the new request to the second server for processing within the preset duration after its receipt.
In one embodiment, as shown in Fig. 11, the device further comprises a reminding module 56, which is configured to send a notification that the number of second servers is too small when the number of times each of all preset second servers has refused the new request exceeds a preset count. The device further comprises a second processing module 57, which is configured to, when the number of working threads equals the first number and a new request is received, use an offline download server to process the new request offline within the preset duration after its receipt.
In one embodiment, the disclosure further provides a data request processing device, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
dispatch received requests for acquiring data to threads, and start the threads to process the requests; and
when the number of working threads equals a first number, if a new request is received, process the new request within a preset duration after its receipt by using an idle thread or a second server; wherein the first number is the maximum number of threads preset for the first server, and an idle thread is a thread among the working threads that has changed from the working state to the idle state.
With regard to the devices in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
Figure 12 is a block diagram of a device 1900 for data request processing according to an exemplary embodiment. For example, the device 1900 may be provided as a server. Referring to Figure 12, the device 1900 comprises a processing component 1922, which further comprises one or more processors, and memory resources represented by a memory 1932 for storing instructions, such as application programs, executable by the processing component 1922. The application programs stored in the memory 1932 may comprise one or more modules, each corresponding to a set of instructions. The processing component 1922 is configured to execute the instructions so as to perform the method described above.
The device 1900 may also comprise a power supply component 1926 configured to perform power management of the device 1900, a wired or wireless network interface 1950 configured to connect the device 1900 to a network, and an input/output (I/O) interface 1958. The device 1900 may operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions is also provided, for example the memory 1932 comprising instructions, and the instructions can be executed by the processing component 1922 of the device 1900 to perform the method described above. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is also provided such that, when the instructions in the storage medium are executed by the processing component of a server, the server is able to perform a data request processing method, the method comprising:
dispatching received requests for acquiring data to threads, and starting the threads to process the requests;
when the number of working threads equals a first number, if a new request is received, processing the new request within a preset duration after its receipt by using an idle thread or a second server; wherein the first number is the maximum number of threads preset for the first server, and an idle thread is a thread among the working threads that has changed from the working state to the idle state.
Before the number of working threads equals the first number, the method further comprises:
when the number of working threads equals a second number, continuing to receive requests, the second number being smaller than the first number;
adding the requests that continue to be received into a bounded queue and, while adding, dispatching requests from the bounded queue to any idle thread that appears;
when the number of requests in the bounded queue equals a predetermined number, dispatching requests from the bounded queue to preset threads until the number of working threads equals the first number, the preset threads including idle threads that appear and threads that have not yet been started.
Processing the new request within the preset duration after its receipt by using an idle thread or a second server when the number of working threads equals the first number comprises:
when the number of working threads equals the first number and the number of requests in the bounded queue equals the predetermined number, if a new request is received, processing the new request within the preset duration after its receipt by using an idle thread or a second server.
Processing the new request within the preset duration after its receipt by using an idle thread comprises:
recording the new request within the preset duration after its receipt, and monitoring whether an idle thread appears;
when an idle thread appears within the preset duration, dispatching the recorded new request to that idle thread for processing;
when no idle thread appears within the preset duration, deleting the recorded new request.
Processing the new request within the preset duration after its receipt by using an idle thread may also comprise:
recording the new request within the preset duration after its receipt, and monitoring whether the number of requests in the bounded queue is smaller than the predetermined number;
when the number of requests in the bounded queue is found to be smaller than the predetermined number within the preset duration, adding the recorded new request into the bounded queue and, while adding, dispatching requests from the bounded queue to any idle thread that appears;
when the number of requests in the bounded queue remains equal to the predetermined number throughout the preset duration, deleting the recorded new request.
Processing the new request within the preset duration after its receipt by using a second server comprises:
within the preset duration after receiving the new request, forwarding the new request to the second server for processing.
The method further comprises:
when the number of times each of all preset second servers has refused the new request exceeds a preset count, sending a notification that the number of second servers is too small.
After dispatching the requests for acquiring data to threads and starting the threads to process the requests, the method further comprises:
when the number of working threads equals the first number, if a new request is received, using an offline download server to process the new request offline within the preset duration after its receipt.
Other embodiments of the disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of what is disclosed herein. This application is intended to cover any variations, uses or adaptations of the disclosure that follow its general principles and include common knowledge or customary technical means in the art not disclosed herein. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise constructions described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (17)

1. A data request processing method for a first server, characterized by comprising:
dispatching received requests for acquiring data to threads, and starting the threads to process the requests;
when the number of working threads equals a first number, if a new request is received, processing the new request within a preset duration after its receipt by using an idle thread or a second server; wherein the first number is the maximum number of threads preset for the first server, and an idle thread is a thread among the working threads that has changed from the working state to the idle state.
2. The method of claim 1, characterized in that,
before the number of working threads equals the first number, the method further comprises:
when the number of working threads equals a second number, continuing to receive requests, the second number being smaller than the first number;
adding the requests that continue to be received into a bounded queue and, while adding, dispatching requests from the bounded queue to any idle thread that appears;
when the number of requests in the bounded queue equals a predetermined number, dispatching requests from the bounded queue to preset threads until the number of working threads equals the first number, the preset threads including idle threads that appear and threads that have not yet been started.
3. The method of claim 2, characterized in that,
processing the new request within the preset duration after its receipt by using an idle thread or a second server when the number of working threads equals the first number comprises:
when the number of working threads equals the first number and the number of requests in the bounded queue equals the predetermined number, if a new request is received, processing the new request within the preset duration after its receipt by using an idle thread or a second server.
4. The method of claim 1, characterized in that,
processing the new request within the preset duration after its receipt by using an idle thread comprises:
recording the new request within the preset duration after its receipt, and monitoring whether an idle thread appears;
when an idle thread appears within the preset duration, dispatching the recorded new request to that idle thread for processing;
when no idle thread appears within the preset duration, deleting the recorded new request.
5. The method of claim 1, characterized in that,
processing the new request within the preset duration after its receipt by using an idle thread comprises:
recording the new request within the preset duration after its receipt, and monitoring whether the number of requests in the bounded queue is smaller than the predetermined number;
when the number of requests in the bounded queue is found to be smaller than the predetermined number within the preset duration, adding the recorded new request into the bounded queue and, while adding, dispatching requests from the bounded queue to any idle thread that appears;
when the number of requests in the bounded queue remains equal to the predetermined number throughout the preset duration, deleting the recorded new request.
6. The method of claim 1, characterized in that,
processing the new request within the preset duration after its receipt by using a second server comprises:
within the preset duration after receiving the new request, forwarding the new request to the second server for processing.
7. The method of claim 6, characterized in that,
the method further comprises:
when the number of times each of all preset second servers has refused the new request exceeds a preset count, sending a notification that the number of second servers is too small.
8. The method of claim 1, characterized in that,
after dispatching the requests for acquiring data to threads and starting the threads to process the requests, the method further comprises:
when the number of working threads equals the first number, if a new request is received, using an offline download server to process the new request offline within the preset duration after its receipt.
9. A data request processing device for a first server, characterized by comprising:
an opening module, configured to dispatch received requests for acquiring data to threads and start the threads to process the requests;
a first processing module, configured to, when the number of working threads equals a first number and a new request is received, process the new request within a preset duration after its receipt by using an idle thread or a second server; wherein the first number is the maximum number of threads preset for the first server, and an idle thread is a thread among the working threads that has changed from the working state to the idle state.
10. The device of claim 9, characterized in that the device further comprises:
a receiving module, configured to continue receiving requests when the number of working threads equals a second number, the second number being smaller than the first number;
an adding module, configured to add the requests that continue to be received into a bounded queue and, while adding, dispatch requests from the bounded queue to any idle thread that appears;
a distribution module, configured to, when the number of requests in the bounded queue equals a predetermined number, dispatch requests from the bounded queue to preset threads until the number of working threads equals the first number, the preset threads including idle threads that appear and threads that have not yet been started.
11. The device of claim 10, characterized in that the first processing module comprises:
a processing submodule, configured to, when the number of working threads equals the first number and the number of requests in the bounded queue equals the predetermined number, process a newly received request within the preset duration after its receipt by using an idle thread or a second server.
12. The device of claim 9, characterized in that the first processing module comprises:
a first recording submodule, configured to record the new request within the preset duration after its receipt and monitor whether an idle thread appears;
a distribution submodule, configured to dispatch the recorded new request to an idle thread that appears within the preset duration;
a first deletion submodule, configured to delete the recorded new request when no idle thread appears within the preset duration.
13. The device of claim 9, characterized in that the first processing module comprises:
a second recording submodule, configured to record the new request within the preset duration after its receipt and monitor whether the number of requests in the bounded queue is smaller than the predetermined number;
an adding submodule, configured to, when the number of requests in the bounded queue is found to be smaller than the predetermined number within the preset duration, add the recorded new request into the bounded queue and, while adding, dispatch requests from the bounded queue to any idle thread that appears;
a second deletion submodule, configured to delete the recorded new request when the number of requests in the bounded queue remains equal to the predetermined number throughout the preset duration.
14. The device of claim 9, characterized in that the first processing module comprises:
a forwarding submodule, configured to forward the new request to the second server for processing within the preset duration after its receipt.
15. The device of claim 14, characterized in that the device further comprises:
a reminding module, configured to send a notification that the number of second servers is too small when the number of times each of all preset second servers has refused the new request exceeds a preset count.
16. The device of claim 9, characterized in that the device further comprises:
a second processing module, configured to, when the number of working threads equals the first number and a new request is received, use an offline download server to process the new request offline within the preset duration after its receipt.
17. A data request processing device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
dispatch received requests for acquiring data to threads, and start the threads to process the requests; and
when the number of working threads equals a first number, if a new request is received, process the new request within a preset duration after its receipt by using an idle thread or a second server; wherein the first number is the maximum number of threads preset for the first server, and an idle thread is a thread among the working threads that has changed from the working state to the idle state.
CN201510077187.XA 2015-02-12 2015-02-12 Data request processing method and device Pending CN104618493A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510077187.XA CN104618493A (en) 2015-02-12 2015-02-12 Data request processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510077187.XA CN104618493A (en) 2015-02-12 2015-02-12 Data request processing method and device

Publications (1)

Publication Number Publication Date
CN104618493A true CN104618493A (en) 2015-05-13

Family

ID=53152768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510077187.XA Pending CN104618493A (en) 2015-02-12 2015-02-12 Data request processing method and device

Country Status (1)

Country Link
CN (1) CN104618493A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101325561A (en) * 2007-06-12 2008-12-17 阿里巴巴集团控股有限公司 Method, apparatus and system for processing electronic mail
US20130263146A1 (en) * 2007-08-28 2013-10-03 Red Hat, Inc. Event driven sendfile
CN102541659A (en) * 2011-12-30 2012-07-04 重庆新媒农信科技有限公司 Method and device for processing of server service requests
CN103605571A (en) * 2013-11-20 2014-02-26 国家电网公司 Control method of database connection pool

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUAN XINQUAN: "Java Multithreading: Thread Pools" (in Chinese), HTTP://BLOG.SINA.CN/DPOOL/BLOG/S/BLOG_616E189F0100RUYA.HTML *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106302570A (en) * 2015-05-14 2017-01-04 阿里巴巴集团控股有限公司 A kind of request processing method and device
CN106412079A (en) * 2016-10-20 2017-02-15 福建天泉教育科技有限公司 Request processing method and system
CN106412079B (en) * 2016-10-20 2019-04-16 福建天泉教育科技有限公司 Request processing method and system
CN107204875A (en) * 2017-05-11 2017-09-26 腾讯科技(深圳)有限公司 Data reporting links monitoring method, device, electronic equipment and storage medium
CN107204875B (en) * 2017-05-11 2022-08-23 腾讯科技(深圳)有限公司 Data reporting link monitoring method and device, electronic equipment and storage medium
CN107357640A (en) * 2017-06-30 2017-11-17 北京奇虎科技有限公司 Request processing method and device, the electronic equipment in multi-thread data storehouse
CN109117279B (en) * 2018-06-29 2020-10-02 Oppo(重庆)智能科技有限公司 Electronic device, method for limiting inter-process communication thereof and storage medium
CN109117279A (en) * 2018-06-29 2019-01-01 Oppo(重庆)智能科技有限公司 The method that is communicated between electronic device and its limiting process, storage medium
CN111475387A (en) * 2019-01-24 2020-07-31 阿里巴巴集团控股有限公司 Server overload judgment method and server
WO2020164612A1 (en) * 2019-02-15 2020-08-20 贵州白山云科技股份有限公司 Smart hotspot scattering method and apparatus, storage medium, and computer device
US11562042B2 (en) 2019-02-15 2023-01-24 Guizhou Baishancloud Technology Co., Ltd. Intelligent hotspot scattering method, apparatus, storage medium, and computer device
CN111784533A (en) * 2020-06-16 2020-10-16 洪江川 Information analysis method based on artificial intelligence and big data and cloud computing platform
CN111784533B (en) * 2020-06-16 2021-04-27 无限数联网络科技(北京)有限公司 Information analysis method based on artificial intelligence and big data and cloud computing platform

Similar Documents

Publication Publication Date Title
CN104618493A (en) Data request processing method and device
CN106375420B (en) Server cluster intelligent monitoring system and method based on load balancing
EP3335120B1 (en) Method and system for resource scheduling
CN107832126B (en) Thread adjusting method and terminal thereof
CN108205541B (en) Method and device for scheduling distributed web crawler tasks
CN110941481A (en) Resource scheduling method, device and system
CN110858843B (en) Service request processing method and device and computer readable storage medium
CN104618693A (en) Cloud computing based online processing task management method and system for monitoring video
US20140019613A1 (en) Job management server and job management method
JP2001331333A5 (en)
WO2010145429A1 (en) Method and system for managing thread pool
CN103366022B (en) Information handling system and disposal route thereof
CN104504147B (en) A kind of resource coordination method of data-base cluster, apparatus and system
US9448920B2 (en) Granting and revoking supplemental memory allocation requests
CN109766172B (en) Asynchronous task scheduling method and device
WO2016029790A1 (en) Data transmission method and device
CN112231108A (en) Task processing method and device, computer readable storage medium and server
CN110049084B (en) Current limiting method, device and equipment of distributed system
CN109597674B (en) Shared virtual resource pool share scheduling method and system
CN103475520B (en) Service processing control method and device in distribution network
US8001341B2 (en) Managing dynamically allocated memory in a computer system
JP2007328413A (en) Method for distributing load
CN106021026B (en) Backup method and device
CN110096352B (en) Process management method, device and computer readable storage medium
JP6823257B2 (en) Job monitoring program, job monitoring device and job monitoring method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150513

RJ01 Rejection of invention patent application after publication