A socket implementation method for an Internet of Things platform
Technical field
The present invention is a design and optimization scheme for the Socket server communication module of an Internet of Things platform, and belongs to the field of Internet of Things communication technology.
Background technology
With the continuous development of Internet of Things technology, sensors and RFID (Radio Frequency Identification) have been widely adopted. Data collected by the sensing layer is transferred to the application layer through the communication layer, and the users whom these data serve also access them through the communication layer. As the hub of the whole system, the Internet of Things platform must therefore cope both with the mass data gathered by the sensing layer and with the access of large numbers of users to the platform's applications, so the task of the communication layer is critical. A well-designed communication layer can easily handle massive data and access requests; otherwise the whole platform may suffer catastrophic consequences.
As an important inter-process communication mechanism, Socket is widely applied in client/server (C/S) communication scenarios. To cope with enormous numbers of connections, the server side must introduce multithreading to process connections in parallel. If every new connection is handled by a dynamically created thread, system performance is greatly weakened, so the thread pool technique, which creates a number of threads in advance, arose. Whether the pool size should be fixed, however, has long been a research question: a statically sized pool saves the cost of creating and destroying threads but cannot cope with connection counts far beyond its capacity, and simply enlarging the pool leaves too many idle threads occupying no small amount of system resources; a dynamically sized pool must constantly spawn new threads when facing a flood of connections, greatly increasing system load. The form and capacity of the thread pool should therefore be decided by analysing the system's operating environment.
Research on dynamic thread pools at home and abroad currently concentrates on three aspects: (1) when a burst of connections arrives, create a batch of threads rather than one thread per request, while capping the number of threads in the pool; (2) optimize the number of worker threads, using statistical principles to predict the user count at peak periods, a relatively simple and fairly reliable strategy; (3) provide several thread pools on one server and dispatch work to different pools according to task type and priority.
Summary of the invention
Technical problem: the present invention builds a socket server with the thread pool technique, designs a system operation support scheme, analyses the working mechanism of the thread pool in depth, calculates the overhead of a dynamic thread pool when facing requests beyond its capacity, proposes using a buffer pool to store excess connection requests instead of dynamically spawning additional threads, and finally optimizes the overall design to cope with common emergencies.
Technical scheme: although the dynamic thread pool has been widely used, its limitations are as obvious as its advantages: when a large number of connection requests beyond its capacity arrive simultaneously, the overhead of dynamically creating and destroying threads can no longer be ignored. The proposed socket server scheme therefore places a buffer pool in front of the thread pool to hold the excess connections; whenever an idle thread appears in the thread pool, a new connection request is taken out of the buffer pool.
A Socket server completes network communication between two programs: the server must publish its own IP (Internet Protocol) address and port number, and the client requests a connection to that address and port. The detailed process is:
The server creates a ServerSocket that listens for client connections; after receiving a request it accepts the connection, takes out the message, processes it, and returns the result. The client sends a connection request to the server and, once connected, sends messages to and receives messages from the server.
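The sequence above can be sketched in Java; this is a minimal illustration, not the patented implementation, and the class and method names are invented for illustration:

```java
import java.io.*;
import java.net.*;

class SocketSetupDemo {
    // Server side: listen with a ServerSocket, accept one connection,
    // take out the message, process it, and return the result.
    static ServerSocket startServer() throws IOException {
        ServerSocket server = new ServerSocket(0); // port 0 = any free port
        Thread t = new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String msg = in.readLine();     // take out the message
                out.println("echo:" + msg);     // return the processed result
            } catch (IOException ignored) { }
        });
        t.setDaemon(true);
        t.start();
        return server;
    }

    // Client side: connect to the given port, send one request, read the reply.
    static String request(int port, String msg) throws IOException {
        try (Socket s = new Socket("127.0.0.1", port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println(msg);
            return in.readLine();
        }
    }
}
```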
1. Communication module design
This module is the core of the socket server and performs its most important function, communication. The basic design of the server-side communication module is:
To satisfy the requirement of duplex communication, i.e. the server and client sending and receiving data simultaneously, two threads must be created, one for sending and one for receiving. What the thread pool stores are the receiving threads: once a connection is established, the receiving thread takes over the Socket, receives messages, and passes them to the upper layer through the receive message queue; at the same time it creates a send thread that monitors the send message queue (the interface to the upper layer, holding the results of upper-layer processing), takes out results, and sends them. Meanwhile the receiving thread stays blocked; when the client requests disconnection, the Socket is closed. In addition, to keep the number of connections from growing excessive, connections beyond the thread count of the pool are placed in the buffer pool and taken out when an idle thread appears.
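The pairing of a fixed pool of worker threads with a buffer pool for excess connections can be sketched with the standard ThreadPoolExecutor: setting the core and maximum pool sizes equal means the pool never grows, and the attached LinkedBlockingQueue plays the role of the buffer pool, holding excess work until a thread becomes idle. This is a sketch under those assumptions, not the patent's exact structure:

```java
import java.util.concurrent.*;

class AcceptorSketch {
    // A pool whose size never changes; tasks beyond poolSize wait in the
    // (unbounded) queue, which here stands in for the buffer pool.
    static ExecutorService newServerPool(int poolSize) {
        return new ThreadPoolExecutor(
                poolSize, poolSize,             // core == max: no dynamic growth
                0L, TimeUnit.MILLISECONDS,      // no keep-alive needed
                new LinkedBlockingQueue<Runnable>());
    }
}
```

Submitting more tasks than the pool size then never spawns an extra thread; the surplus simply waits in the queue.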
2. Thread pool design
Definition 1. The two main costs in multithreading are the creation and destruction of threads, and the maintenance of threads. Let the first cost be C1 and the second C2.
C1: the cost of creating and destroying a thread, mostly the time spent allocating memory for it;
C2: the cost of maintaining a thread, i.e. its context-switch time;
n: the thread pool size;
r: the number of currently running threads.
In practice C1 >> C2. Table 1 contrasts the influence on system performance of whether a thread pool is used.
Table 1. System overhead before and after introducing the thread pool

  Case         Overhead with pool      Overhead without pool   Performance gained
  0 ≤ r ≤ n    C2·n                    C1·r                    C1·r − C2·n
  r > n        C2·n + C1·(r − n)       C1·r                    C1·n − C2·n
Table 1 considers two situations. When the number of live threads in the pool does not exceed the pool size (0 ≤ r ≤ n), the overhead with a thread pool is limited to switching between threads, i.e. C2·n; without a thread pool the system must create and destroy a thread for each new connection, at cost C1·r, so the pooled scheme improves system performance by C1·r − C2·n.
In the second situation the task count exceeds the pool's maximum, so the pool must create new threads for the surplus tasks, at cost C2·n + C1·(r − n), while the overhead without a pool is still C1·r. The pooled scheme improves system performance by C1·n − C2·n.
The key issue of the pooled scheme is how to set the pool size. If there are too many threads in the pool, the system consumes a great deal of processing and cache resources maintaining the idle threads; if too few, it must constantly create new threads dynamically and destroy them when tasks finish, and the price paid may exceed the resources consumed by the tasks themselves. How to determine the best thread count n is discussed below.
Definition 2. The threads alive in the pool change constantly in a real environment. Let r be the number of live threads and f(r) its distribution law; the expected performance gain for a pool of size n is:

E(n) = \sum_{r=0}^{n} (C_1 r - C_2 n) f(r) + \sum_{r=n+1}^{\bar{N}} (C_1 - C_2)\, n\, f(r)    (1)

Let the best thread pool size be n*; it is given by:

n^* = \arg\max_{n \in N} E(n)    (2)

where N denotes the set of admissible thread counts in the pool and \bar{N} the upper bound of the values over which E(n) is taken. Rewrite (1) in the following form, where p(r) is the probability density function of the number of live threads:

E(n) = \int_{0}^{n} (C_1 r - C_2 n)\, p(r)\, dr + \int_{n}^{\bar{N}} (C_1 - C_2)\, n\, p(r)\, dr    (3)

To obtain the maximum of E(n), differentiate (3), giving (4):

\frac{dE(n)}{dn} = -C_2 \int_{0}^{n} p(r)\, dr + (C_1 - C_2) \int_{n}^{\bar{N}} p(r)\, dr = 0    (4)

Rewriting (4), with \xi = C_2 / C_1 the ratio of the cost of maintaining a live thread to that of creating a new one:

\int_{0}^{n^*} p(r)\, dr = 1 - \xi    (5)

Since the thread pool size is an integer, the value is determined by the following formula, where P is the cumulative distribution of the number of live threads:

n^* = \lfloor P^{-1}(1 - \xi) \rfloor    (6)

This formula shows that n* is related to ξ: when the thread-switching overhead is much smaller than the thread creation and destruction overhead, the pool capacity is larger. Formulas (5) and (6) also show that n* is related to the system's current load p(r).
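Formula (6) reduces to a one-line computation when the live-thread distribution is assumed uniform, since then P(n) = n/N and n* = floor(N·(1 − ξ)). The sketch below illustrates only that special case; the class and method names, and the sample figures in the usage note, are invented for illustration:

```java
class PoolSizeSketch {
    // Optimal pool size under an ASSUMED uniform distribution of live
    // threads on [0, maxThreads]: n* = floor(maxThreads * (1 - xi)),
    // where xi = C2 / C1 (maintenance cost over creation cost).
    static int optimalPoolSize(double maintainCost, double createCost, int maxThreads) {
        double xi = maintainCost / createCost;   // xi = C2 / C1
        return (int) Math.floor(maxThreads * (1.0 - xi));
    }
}
```

For example, with a maintenance cost of 100, a creation cost of 400, and at most 1000 live threads, the uniform-distribution optimum is 750.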
The Socket design and optimization scheme for an Internet of Things platform of the present invention establishes multithreaded concurrent connections in the communication layer of the Internet of Things platform using a buffer pool and a thread pool, and proposes an efficient Socket server design that minimizes overhead and cache usage while coping with large numbers of users and data requests. It builds the socket server with the thread pool technique, designs a system operation support scheme, analyses the working mechanism of the thread pool in depth, calculates the overhead of a dynamic thread pool when facing requests beyond its capacity, proposes using a buffer pool to store excess connection requests instead of dynamically spawning additional threads, and finally optimizes the overall design to cope with common emergencies.
The socket server built with the thread pool technique comprises the communication module design and the thread pool design. Although the dynamic thread pool has been widely used, its limitations are as obvious as its advantages: when a large number of connection requests beyond its capacity arrive simultaneously, the overhead of dynamically creating and destroying threads can no longer be ignored. The proposed scheme therefore places a buffer pool in front of the thread pool to hold the excess connections; whenever an idle thread appears in the thread pool, a new connection request is taken out of the buffer pool.
The Socket server completes network communication between two programs: the server must publish its own IP address and port number, and the client requests a connection to that address and port. The detailed process is:
The server creates a ServerSocket that listens for client connections; after receiving a request it accepts the connection, takes out the message, processes it, and returns the result. The client sends a connection request to the server and, once connected, sends messages to and receives messages from the server.
The communication module design is the core of the socket server and performs its most important function, communication. The basic design of the server-side communication module is:
To satisfy the requirement of duplex communication, i.e. the server and client sending and receiving data simultaneously, two threads must be created, one for sending and one for receiving. What the thread pool stores are the receiving threads: once a connection is established, the receiving thread takes over the Socket, receives messages, and passes them to the upper layer through the receive message queue; at the same time it creates a send thread that monitors the send message queue (the interface to the upper layer, holding the results of upper-layer processing), takes out results, and sends them. Meanwhile the receiving thread stays blocked; when the client requests disconnection, the Socket is closed. In addition, to keep the number of connections from growing excessive, connections beyond the thread count of the pool are placed in the buffer pool and taken out when an idle thread appears.
The thread pool design concerns the determination of the pool size. The two main costs in the design are the creation and destruction of threads, and their maintenance. Let the first cost be C1 and the second C2; C1 is mostly the time spent allocating memory for a thread, while C2 is its context-switch time. In practice C1 >> C2. The threads alive in the pool change constantly in a real environment; let r be the number of live threads (the live thread count) and f(r) its distribution law. The expected performance gain for a pool of size n is:
E(n) = \sum_{r=0}^{n} (C_1 r - C_2 n) f(r) + \sum_{r=n+1}^{\bar{N}} (C_1 - C_2)\, n\, f(r)    (1)

Let the best thread pool size be n*; it is given by:

n^* = \arg\max_{n \in N} E(n)    (2)

where N denotes the set of admissible thread counts in the pool and \bar{N} the upper bound of the values over which E(n) is taken. Rewrite (1) in the following form, where p(r) is the probability density function of the number of live threads:

E(n) = \int_{0}^{n} (C_1 r - C_2 n)\, p(r)\, dr + \int_{n}^{\bar{N}} (C_1 - C_2)\, n\, p(r)\, dr    (3)

To obtain the maximum of E(n), differentiate (3), giving (4):

\frac{dE(n)}{dn} = -C_2 \int_{0}^{n} p(r)\, dr + (C_1 - C_2) \int_{n}^{\bar{N}} p(r)\, dr = 0    (4)

Rewriting (4), with \xi = C_2 / C_1 the ratio of the cost of maintaining a live thread to that of creating a new one:

\int_{0}^{n^*} p(r)\, dr = 1 - \xi    (5)

Since the thread pool size is an integer, the value is determined by the following formula, where P is the cumulative distribution of the number of live threads:

n^* = \lfloor P^{-1}(1 - \xi) \rfloor    (6)

This formula shows that n* is related to ξ: when the thread-switching overhead is much smaller than the thread creation and destruction overhead, the pool capacity is larger. Formulas (5) and (6) also show that n* is related to the system's current load p(r).
When assessing the system's load capacity, assume that p(r) is uniformly distributed and that the number of users is 4000. Since creating and then destroying a thread takes the system about 400 ms while a context switch takes about 20 ms, we obtain n* = 3600.
The system operation support scheme uses a self-defined protocol: client and server communicate by sending and receiving data packets of about 100 bytes. After processing a message, the system returns it in packet form. To achieve duplex operation, separate send and receive threads are set up, the send thread being a child of the receiving thread. Messages received by the server are put into the receive message queue and handed to the business layer for processing; when the business layer finishes, each thread goes to the send message queue, takes out its own result, and sends it.
The common emergencies are four situations: 1) how to mark the tasks of different clients; 2) blocking while receiving data; 3) read/write operations continuing after the Socket is closed; 4) unexpected client disconnection.
For marking the tasks of different clients, the solution is: since every task belongs to a different client, the client's IP address and port number can be obtained from the Socket object belonging to that client. A Mark field holding the client's IP and port is defined, its value being the connection's IP and port. In a while loop, the thread serving each task polls the topmost element of the message queue (MessageQueue) and takes out the message as soon as it finds a matching Mark; otherwise the thread suspends.
Blocking while receiving data: in this system the client clicks the interface, the program sends a packet to the server, and the server then performs the corresponding function, so tasks occupy little CPU but frequently block on I/O (Input/Output) operations. If a user does not click the interface for a long time, a worker thread in the pool is held by that user the whole time and can perform no other task; if all threads in the pool are blocked, newly arrived tasks cannot be processed. Moreover, when the sender's data volume is too large, the receiver may fail to receive it completely.
The solution is: define three variables nIdx, nTotalLen and nReadLen (nIdx holds the total number of bytes read so far, nTotalLen the total number of bytes to read, and nReadLen the number of bytes read in one pass of the loop); the value of nTotalLen is determined by a field in the packet header. The while loop runs until the end of the input stream is reached. To avoid busy-waiting, an unhealthy use of the CPU, the thread the while loop belongs to is suspended when no data arrives and woken again when input is available.
Read/write operations after the Socket is closed: since the closing of the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the send thread finishes its work. The solution is to set up a notification mechanism: the send thread sends a notice to the receiving thread when it finishes, and only then does the receiving thread close the Socket. Concretely, a protocol field PacSeq (the sequence number of a received request packet) is added: for example, when the receiving thread receives a packet with PacSeq = x, it keeps the Socket connected until the send thread, having sent back the result required by that request packet, sends it the notice.
Unexpected client disconnection: the client sends a heartbeat packet every 5 minutes; on the server, the receiving thread creates a daemon thread that monitors the heartbeats and counts down. The receiving thread does not process heartbeat packets itself; it simply notifies the daemon thread to reset the timer, and the connection with the client is maintained. If the server receives nothing within 5 minutes, the daemon thread sends a notification that wakes the receiving thread, which then closes the socket. During the countdown all three threads are suspended, so no system resources are occupied. If the client receives no feedback, or an error occurs, within a certain time after sending a request, it concludes that the server has disconnected and resends a connection request after the disconnection.
Beneficial effects: aiming at the massive demand brought by the Internet of Things, the present invention designs the server communication module with sockets, studies in depth the influence of thread pool settings on system performance, uses a buffer queue to handle threads exceeding the thread pool capacity, and analyses and optimizes the problems the system may encounter. Finally, system performance is simulated. The results show that a thread pool combined with a buffer pool can, up to a point, greatly reduce system overhead when facing excessive connections; that short connections should not be used for low-overhead tasks; and that although the buffer pool reduces overhead when the number of connections only slightly exceeds the pool capacity, once it reaches or exceeds a certain threshold the system's response time rises sharply.
Description of drawings
Fig. 1 is the Socket setup process,
Fig. 2 is the communication module operation flow chart,
Fig. 3 is the flow chart for avoiding exceptions after the Socket is closed,
Fig. 4 is the heartbeat detection flow chart,
Fig. 5 is the socket connection speed comparison of Experiment 1,
Fig. 6 is the system BT (network throughput) histogram of Experiment 2,
Fig. 7 is the system DT performance histogram of Experiment 2,
Fig. 8 is the system DQL performance histogram of Experiment 2,
Fig. 9 is the system DBT performance histogram of Experiment 2,
Fig. 10 is the SRT (system response time) comparison of Experiment 3.
Embodiment
System operation support
The system uses a self-defined protocol: client and server communicate by sending and receiving data packets of about 100 bytes. After processing a message, the system returns it in packet form. To achieve duplex operation, separate send and receive threads are set up, the send thread being a child of the receiving thread. Messages received by the server are put into the receive message queue and handed to the business layer for processing; when the business layer finishes, each thread goes to the send message queue, takes out its own result, and sends it. Four situations arise here; the solutions are as follows:
Situation 1: how to mark the tasks of different clients.
Solution: since every task belongs to a different client, the client's IP address and port number can be obtained from the Socket object belonging to that client, and a Mark field is set whose value is the connection's IP and port. In a while loop, the thread serving each task polls the topmost element of the message queue (MessageQueue) and takes out the message as soon as it finds a matching Mark; otherwise the thread suspends.
Situation 2: blocking while receiving data.
Solution: in this system the client clicks the interface, the program sends a packet to the server, and the server then performs the corresponding function, so tasks occupy little CPU but frequently block on I/O operations. If a user does not click the interface for a long time, a worker thread in the pool is held by that user the whole time and can perform no other task; if all threads in the pool are blocked, newly arrived tasks cannot be processed. Moreover, when the sender's data volume is too large, the receiver may fail to receive it completely.
This problem is solved by defining three variables nIdx, nTotalLen and nReadLen, holding respectively the total bytes read so far, the total bytes to read, and the bytes read in one pass of the loop; the value of nTotalLen is determined by a field in the packet header. The while loop runs until the end of the input stream is reached. To avoid busy-waiting, an unhealthy use of the CPU, the thread the while loop belongs to is suspended when no data arrives and woken again when input is available.
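The framed read loop can be sketched with the variable names from the text; the 4-byte big-endian length header is an assumption for illustration, and blocking on the stream stands in for the suspend/wake mechanism described:

```java
import java.io.*;

class FramedReadSketch {
    // Read one complete packet: nTotalLen comes from the header, and the
    // loop accumulates nIdx until the whole body has arrived, so a packet
    // split across TCP segments is still received completely.
    static byte[] readPacket(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int nTotalLen = din.readInt();          // total bytes to read (header field)
        byte[] buf = new byte[nTotalLen];
        int nIdx = 0;                           // bytes read so far
        while (nIdx < nTotalLen) {
            int nReadLen = din.read(buf, nIdx, nTotalLen - nIdx); // bytes this pass
            if (nReadLen < 0) throw new EOFException("stream ended mid-packet");
            nIdx += nReadLen;
        }
        return buf;
    }
}
```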
Situation 3: read/write operations continuing after the Socket is closed.
Solution: since the closing of the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the send thread finishes its work. A notification mechanism solves this: the send thread sends a notice to the receiving thread when it finishes, and only then does the receiving thread close the Socket. As shown in Fig. 3, a protocol field PacSeq is added to number the received request packets: for example, when the receiving thread receives a packet with PacSeq = x, it keeps the Socket connected until the send thread, having sent back the result required by that request packet, sends it the notice.
Situation 4: unexpected client disconnection.
Solution: this problem can be solved by the algorithm shown in Fig. 4. The client sends a heartbeat packet every 5 minutes; on the server, the receiving thread creates a daemon thread that monitors the heartbeats and counts down. The receiving thread does not process heartbeat packets itself; it simply notifies the daemon thread to reset the timer, and the connection with the client is maintained. If the server receives nothing within 5 minutes, the daemon thread sends a notification that wakes the receiving thread, which then closes the socket. During the countdown all three threads are suspended, so no system resources are occupied. If the client receives no feedback, or an error occurs, within a certain time after sending a request, it concludes that the server has disconnected and resends a connection request after the disconnection.
We use LoadRunner software for simulation testing. LoadRunner is a load-testing tool that predicts system behaviour and performance; it generates virtual users and simulates real users' business operations through them. A socket server was built for the experiments, with a basic configuration of a dual-core 2.4 GHz CPU and 4 GB of memory; the database is Oracle 11g.
Experiment 1 examines the time needed to establish a socket connection. Experiment 2 analyses, with 4000 connections and a thread pool capacity of 3600, the system's performance when it dynamically creates new threads for the excess connections. Experiment 3 introduces the buffer pool on the basis of Experiment 2 and observes the change in system performance.
Experiment 1: socket connection speed test
Establishing a socket connection takes considerable time; if the server adopts short connections, the time spent connecting cannot be ignored. To verify the socket connection speed, 100 virtual users issue connection requests to the server. The client program performs two actions in all:
1) attempt to connect to the server;
2) after connecting successfully, send a packet and display the server's feedback. The server side also performs two actions: 1) listen for client connection requests; 2) after a connection succeeds, send back the packet the client sent.
The virtual users' behaviour during the experiment is shown in Fig. 5, which plots connection speed: the vertical axis is the number of virtual users and the horizontal axis the running time. The blue line represents the number of running virtual users and the green line the number that have finished. It can be seen that from the first user starting to connect to the last finishing took 5 seconds in all, and most users began sending data in the 4th second after connecting; that is, connecting consumed four fifths of the time of the whole operation.
This experiment shows that when the tasks themselves are cheap and similar tasks may occur frequently, server and client should not use short connections, but should keep each connection alive for a period after completion, to avoid the overhead brought by repeatedly disconnecting and reconnecting.
Experiment 2: comparison of server performance parameters
To verify the performance impact of the system dynamically creating new threads when the connection count exceeds the thread pool capacity, a buffer pool is set up between the listening accept() call and the thread pool; surplus connections are deposited in the pool and taken out to run when the thread pool has a vacant thread.
System performance when facing connections beyond the thread pool capacity is examined with and without the buffer pool. Four experiments are run in all, with the thread pool set to 3600 and the connection counts set to 3600, 3800, 4000 and 4200 respectively. Without the buffer pool, the system dynamically creates new threads for requests beyond the pool capacity. To understand the server's memory and disk situation, the parameters in Table 2 are observed:
Table 2. Server performance parameters
The content of this experiment is the same as Experiment 1, but to increase the pressure on the server the whole process iterates: the client disconnects after completing its operations, then requests a connection again and performs the same operations, continuing for five minutes in all. The results are shown in Fig. 6, which gives the system's network throughput (BT) under these conditions. When the connection count equals the pool capacity, the presence or absence of the buffer pool makes little difference to throughput. When it rises to 3800, the system with the buffer pool need not create threads in real time, so its resources go mainly to data transfer and its throughput is higher than that of the system without it. At 4000 connections the two throughputs are roughly level, and after 4200 connections the throughput of the system without the buffer pool exceeds that of the system with it by nearly 25%, showing that the buffered system cannot raise its processing efficiency much when facing too many connections.
Figs. 7 to 9 show the change in the system's disk activity before and after adding the buffer pool. The DT, DQL and DBT overheads of the system without the buffer pool grow with the connection count: at 4200 connections they have risen by 26.6%, 20.3% and 45.9% respectively compared with the initial 3600 connections. This is mainly because the system keeps creating and destroying threads in real time, the number of running threads grows ever larger, large amounts of memory are consumed, and the system has to use the disk as virtual memory, so the disk's burden rises too. After the buffer pool is added, although throughput is unsatisfactory when facing 4200 connections, system overhead remains basically steady: the smaller number of running threads reduces the thread-switching overhead, and since requests beyond the pool capacity are deposited in the buffer pool, no dynamic thread creation and destruction is needed, which significantly lowers the system load.
It follows that connections exceeding the pool capacity but small in number should be put into the buffer pool rather than handled by dynamically created threads; otherwise system performance drops sharply, because the cost of creating and destroying threads exceeds the context-switching cost of maintaining them. Although rashly increasing the thread count to admit more connections can raise processing capacity, it is likely to degrade system performance. Throughout, operations on the hard disk should be avoided as far as possible, since they greatly slow down the system.
Experiment 3: system response time comparison
Experiment 2 shows that placing connections beyond the pool capacity in the buffer pool to await idle threads reduces system overhead. But when the connection count is too large, the system reacts slowly to new connections and new users may need to wait a long time to connect. To find the inflection point of the system response time (SRT), the experiment above is repeated with an ever-increasing number of connection requests while the response time is observed; the results are shown in Fig. 10. Its horizontal axis is the number of concurrent connections the system bears and its vertical axis the response time to connection requests. At 3600 requests the presence or absence of the buffer pool makes little difference, since no new threads are created. At 3800 requests the buffered system's SRT is shorter, because a new connection waits in the pool for less time than creating a new thread takes. But when the request count reaches 4000, the buffered system's response time jumps to 497 ms, a nonlinear growth not proportional to the connection count, while the unbuffered system's SRT barely changes, holding at 422 ms: creating new threads only adds system overhead without slowing each task's processing. At 4200 connections the buffered SRT is 885 ms, far above the unbuffered SRT of 442 ms.
Inventive point 1: building the socket server with thread pool technology
Communication module design
To satisfy the requirement of duplex communication, i.e. the server and the client sending and receiving data simultaneously, two threads must be established, one for sending and one for receiving. What the thread pool of Fig. 2 stores are receiving threads. After a connection is established, a receiving thread takes the Socket connection, receives messages and delivers them to the upper layer through the receive message queue, and at the same time establishes a sending thread, which monitors the send message queue (the interface to the upper layer, holding the results processed there), takes out results, and transmits them. Meanwhile, the receiving thread remains in a blocked state; when the client requests disconnection, the Socket connection is closed. In addition, to prevent the number of connections from growing too large, connections exceeding the number of threads in the pool are placed in the buffer pool and taken out only when an idle thread appears.
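The overflow behavior described above can be sketched in Java using `java.util.concurrent`, with a fixed-size pool of receiving threads and the executor's work queue playing the role of the buffer pool. This is an illustrative sketch, not the patent's implementation; names such as `CommModuleSketch` and `queuedConnections` are invented for the example.

```java
import java.util.concurrent.*;

public class CommModuleSketch {
    // Fixed-size pool of receiving threads. The unbounded work queue plays
    // the role of the buffer pool: connections beyond the pool capacity wait
    // here for an idle thread instead of triggering creation of a new one.
    static int queuedConnections(int poolSize, int connections) {
        BlockingQueue<Runnable> bufferPool = new LinkedBlockingQueue<>();
        ThreadPoolExecutor receivers = new ThreadPoolExecutor(
                poolSize, poolSize, 0L, TimeUnit.SECONDS, bufferPool);
        CountDownLatch clientsDone = new CountDownLatch(1);
        for (int i = 0; i < connections; i++) {
            // Each task stands in for a receiving thread blocked on its Socket.
            receivers.execute(() -> {
                try { clientsDone.await(); } catch (InterruptedException e) { }
            });
        }
        // The first poolSize submissions are handed directly to new core
        // threads; the rest are queued, so this count is deterministic.
        int waiting = bufferPool.size();
        clientsDone.countDown();
        receivers.shutdown();
        return waiting;
    }

    public static void main(String[] args) {
        System.out.println(queuedConnections(2, 5)); // prints 3
    }
}
```

With a pool of 2 receiving threads and 5 simultaneous connections, the first 2 go straight to pool threads and the remaining 3 wait in the buffer pool until a thread becomes idle.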
Thread pool design
The two main costs in thread pool design are the creation and destruction of threads, and the maintenance of threads. Let the first cost be C1 and the second C2. C1 consists mostly of the time needed to allocate memory for a thread, while C2 is the thread's context-switching time; in practice C1 >> C2. The number of threads surviving in the pool changes constantly in a real environment. Let r be the number of surviving threads (i.e. the thread pool size) and f(r) its distribution law; the expectation of the thread pool size n is then given by formula (1).
Let the optimal thread pool size be n*, with expectation given by formula (2), where N denotes the set of possible thread counts in the pool and the expression takes the upper bound (supremum) of E(n). Formula (1) can be rewritten in the form of formula (3), where p(r) is the probability density function of the number of threads surviving in the pool. To obtain the maximum of E(n), formula (3) is differentiated, yielding formula (4). Rewriting formula (4) and letting ξ = C2/C1 denote the ratio of the cost of keeping a thread active to the cost of creating a new one gives formula (5). Because the thread pool size must be an integer, its value is determined by formula (6).
These formulas show that n* depends on ξ: when the thread context-switching cost is much smaller than the cost of thread creation and destruction, the pool capacity is larger. Formulas (5) and (6) also show that n* depends on the current system load p(r).
To assess the system's load capacity, suppose that p(r) is uniformly distributed and that the number of users is 4000. Since creating and then destroying a thread takes the system about 400 ms and a context switch takes about 20 ms, the formulas give n* = 3600.
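The patent's formulas (1) through (6) are not reproduced above, so the following Java sketch substitutes a generic stand-in cost model, which is an assumption and not the patent's own derivation: each pooled thread incurs the maintenance cost C2, each connection beyond the pool incurs the creation-and-destruction cost C1, the load r is uniform as stated, and the expected cost is minimized by brute force. With the cited figures (C1 = 400 ms, C2 = 20 ms, 4000 users) this stand-in model yields 3800 rather than the patent's n* = 3600, simply because the exact formulas differ.

```java
public class PoolSizing {
    // Stand-in cost model (an assumption; the patent's formulas (1)-(6) are
    // not reproduced in the text): with pool size n and a load of r
    // simultaneous connections, each pooled thread costs c2 (context
    // switching) and each connection beyond the pool costs c1 (thread
    // creation plus destruction). r is uniform on {0, ..., maxUsers}.
    static double expectedCost(int n, double c1, double c2, int maxUsers) {
        double over = 0;
        for (int r = n + 1; r <= maxUsers; r++) {
            over += (r - n) / (double) (maxUsers + 1); // E[(r - n)+] term
        }
        return c2 * n + c1 * over;
    }

    // Brute-force search for the pool size minimizing the expected cost.
    static int optimalPoolSize(double c1, double c2, int maxUsers) {
        int best = 0;
        double bestCost = Double.MAX_VALUE;
        for (int n = 0; n <= maxUsers; n++) {
            double cost = expectedCost(n, c1, c2, maxUsers);
            if (cost < bestCost) { bestCost = cost; best = n; }
        }
        return best;
    }

    public static void main(String[] args) {
        // 400 ms create/destroy, 20 ms context switch, 4000 users.
        System.out.println(optimalPoolSize(400, 20, 4000));
    }
}
```

The qualitative behavior matches the text: the cheaper context switching is relative to thread creation (smaller ξ = C2/C1), the larger the optimal pool.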
Inventive point 2: how to mark the tasks of different clients
Because each task belongs to a different client, the client's IP address and port number can be obtained from the Socket object belonging to that client. A Mark field is therefore set whose value is the IP address and port number of the connection. In a while loop, the thread of each task polls the topmost element of the message queue MessageQueue and, as soon as it finds a matching Mark, takes out the corresponding message; otherwise the thread suspends.
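A minimal Java sketch of this marking scheme (the `Message` class and the `markOf`/`pollFor` helpers are illustrative names, not from the patent): the Mark is built from the connection's IP and port, and a task's thread inspects only the head of the queue, taking the message when the Mark matches and otherwise returning empty-handed, which is the point at which the real thread would suspend.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MarkDispatch {
    static class Message {
        final String mark;      // "ip:port" of the owning client connection
        final String payload;
        Message(String mark, String payload) { this.mark = mark; this.payload = payload; }
    }

    // The Mark field: IP address and port number of the connection. In the
    // server these would come from socket.getInetAddress().getHostAddress()
    // and socket.getPort(); here they are passed in explicitly.
    static String markOf(String ip, int port) {
        return ip + ":" + port;
    }

    // Poll only the topmost element of MessageQueue: take it if its Mark
    // matches this task's Mark, otherwise return null (the thread suspends).
    static Message pollFor(Deque<Message> messageQueue, String myMark) {
        Message head = messageQueue.peekFirst();
        if (head != null && head.mark.equals(myMark)) {
            return messageQueue.pollFirst();
        }
        return null;
    }

    public static void main(String[] args) {
        Deque<Message> q = new ArrayDeque<>();
        q.add(new Message(markOf("10.0.0.5", 4242), "result A"));
        q.add(new Message(markOf("10.0.0.9", 5151), "result B"));
        System.out.println(pollFor(q, "10.0.0.9:5151") == null); // true: not at the head yet
        System.out.println(pollFor(q, "10.0.0.5:4242").payload); // result A
    }
}
```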
Inventive point 3: solving blocking when receiving data
In the whole system, the client clicks the interface, the program sends a data packet to the server, and the server then performs the corresponding function. The tasks are therefore characterized by low CPU occupancy but frequent blocking I/O operations. If a user does not click the interface for a long time, a worker thread in the thread pool will be occupied by that user the whole time and cannot execute any task. If all threads in the pool are in a blocked state, newly arriving tasks cannot be processed. Moreover, when the sender's data volume is too large, the receiver may be unable to receive it completely.
To solve this problem, three variables nIdx, nTotalLen and nReadLen can be defined, holding respectively the number of bytes read so far, the total number of bytes to be read, and the number of bytes read in one iteration of the loop; the value of nTotalLen is determined by a field in the packet header. The while loop continues until it has read to the end of the input stream and then exits. To avoid busy-waiting, which is an unsound way of using the CPU, the thread responsible for the while loop suspends when no data is arriving and is woken up again when input arrives.
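The read loop described above can be sketched in Java as follows (`readPacket` is an illustrative name, and a 4-byte length prefix is assumed as the header field that determines nTotalLen):

```java
import java.io.*;

public class PacketReader {
    // nTotalLen comes from a field in the packet header (here a 4-byte length
    // prefix); the loop keeps reading until all bytes have arrived, so a large
    // packet split across several TCP segments is still received completely.
    static byte[] readPacket(DataInputStream in) throws IOException {
        int nTotalLen = in.readInt();        // total bytes to read, from the header
        byte[] buf = new byte[nTotalLen];
        int nIdx = 0;                        // bytes read so far
        while (nIdx < nTotalLen) {
            int nReadLen = in.read(buf, nIdx, nTotalLen - nIdx); // bytes read this pass
            if (nReadLen < 0) {
                throw new EOFException("stream ended before the packet was complete");
            }
            nIdx += nReadLen;
        }
        return buf;
    }

    public static void main(String[] args) throws IOException {
        // Simulate a packet: 4-byte length header followed by the body.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream dos = new DataOutputStream(bos);
        byte[] body = "hello".getBytes("UTF-8");
        dos.writeInt(body.length);
        dos.write(body);
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        System.out.println(new String(readPacket(in), "UTF-8")); // prints hello
    }
}
```

The key point is that `InputStream.read` may return fewer bytes than requested, so nIdx must accumulate across iterations rather than assuming one read fills the buffer.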
Inventive point 4: solving read and write operations performed after the Socket has been closed
Because the closing of the Socket is controlled by the receiving thread, the Socket may be closed by the receiving thread before the sending thread has finished its work. This can be solved by establishing a notification mechanism: the sending thread sends a notice to the receiving thread after it finishes executing, and only then does the receiving thread close the Socket. As shown in Figure 3, a protocol field PacSeq is added to number the incoming request packets. For example, when the receiving thread receives a packet with PacSeq = x, the sending thread sends a notice to the receiving thread after it has sent back the result required by that request packet; until it receives this notice, the receiving thread keeps the Socket connected.
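The notification mechanism can be sketched as a small piece of shared bookkeeping (class and method names are illustrative): the receiving thread registers each PacSeq it receives, the sending thread acknowledges the sequence number after sending the reply and notifies, and the receiving thread closes the Socket only when no replies are outstanding.

```java
import java.util.HashSet;
import java.util.Set;

public class CloseGuard {
    private final Set<Integer> outstanding = new HashSet<>(); // PacSeq values awaiting replies

    // Receiving thread: a request packet with this PacSeq has arrived.
    synchronized void packetReceived(int pacSeq) {
        outstanding.add(pacSeq);
    }

    // Sending thread: the reply for this PacSeq has been sent; notify the
    // receiving thread in case it is waiting to close the Socket.
    synchronized void replySent(int pacSeq) {
        outstanding.remove(pacSeq);
        notifyAll();
    }

    // Receiving thread: the Socket may be closed only once every
    // outstanding request packet has been answered.
    synchronized boolean mayCloseNow() {
        return outstanding.isEmpty();
    }

    public static void main(String[] args) {
        CloseGuard g = new CloseGuard();
        g.packetReceived(7);                 // request with PacSeq = 7 arrives
        System.out.println(g.mayCloseNow()); // false: reply 7 still pending
        g.replySent(7);                      // sending thread finishes and notifies
        System.out.println(g.mayCloseNow()); // true: safe to close the Socket
    }
}
```

In the real server the receiving thread would `wait()` inside a loop until `mayCloseNow()` becomes true; the sketch only shows the state transitions.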
Inventive point 5: solving unexpected client disconnection
This problem can be solved by the algorithm shown in Figure 4. The client sends a heartbeat packet every 5 minutes. On the server side, the receiving thread creates a daemon thread responsible for monitoring this heartbeat packet and running a countdown. The receiving thread does not process heartbeat packets itself; when one arrives it simply sends a notice to the daemon thread to reset the timer, and the connection with the client is maintained. If the server receives nothing within 5 minutes, the daemon thread sends a notice that wakes the receiving thread, and the receiving thread closes the socket. Because all three threads are in a suspended state during the countdown phase, no system resources are occupied. If the client receives no feedback within a certain time after sending a request, or an exception occurs, it judges that the server has disconnected, and it resends a connection request after the disconnection.
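The countdown logic can be sketched as a small timer object (the class name and the explicitly passed clock values are illustrative; the real daemon thread would sleep until the deadline and be reset by the receiving thread's notice):

```java
public class HeartbeatTimer {
    static final long TIMEOUT_MS = 5 * 60 * 1000;  // heartbeat interval: 5 minutes

    private long deadline;

    HeartbeatTimer(long nowMs) {
        deadline = nowMs + TIMEOUT_MS;
    }

    // Receiving thread got a heartbeat packet: instead of processing it,
    // it notifies the daemon thread, which resets the countdown.
    void heartbeatArrived(long nowMs) {
        deadline = nowMs + TIMEOUT_MS;
    }

    // Daemon thread checks this; once true, it wakes the receiving thread,
    // which closes the socket.
    boolean expired(long nowMs) {
        return nowMs >= deadline;
    }

    public static void main(String[] args) {
        long t0 = 0;
        HeartbeatTimer timer = new HeartbeatTimer(t0);
        timer.heartbeatArrived(t0 + 4 * 60 * 1000);            // heartbeat at 4 min resets the countdown
        System.out.println(timer.expired(t0 + 8 * 60 * 1000)); // false: only 4 min since last heartbeat
        System.out.println(timer.expired(t0 + 9 * 60 * 1000)); // true: 5 min elapsed, close the socket
    }
}
```

Passing the current time as a parameter keeps the countdown logic deterministic and testable; the production daemon thread would use the system clock.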