
CN102546437A - Internet of things platform-oriented socket implementation method - Google Patents

Internet of things platform-oriented socket implementation method

Info

Publication number
CN102546437A
Authority
CN
China
Prior art keywords: thread, socket, pool, server, thread pool
Legal status: Granted
Application number
CN201210038597XA
Other languages
Chinese (zh)
Other versions
CN102546437B (en)
Inventor
王堃
于悦
暴建民
胡海峰
郭篁
房硕
Current Assignee
Wang Kun
Original Assignee
Nanjing Post and Telecommunication University
Priority date
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University
Priority to CN201210038597.XA
Publication of CN102546437A
Application granted
Publication of CN102546437B
Legal status: Active

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a socket implementation method oriented to an Internet of Things (IoT) platform. A server for IoT applications must provide efficient, highly reliable service for a huge number of concurrent connections, and its communication layer must keep the system load as low as possible. To this end, a buffer pool and a thread pool are used in the communication layer of the IoT platform to establish multi-threaded concurrent connections, and an efficient Socket server design is proposed that can answer large numbers of users and data requests while minimizing system overhead and the use of the cache. Simulation results show that a Socket server which introduces a thread pool and a buffer pool shortens the time spent creating and destroying threads and the time clients spend blocked, and reduces dynamic thread creation and destruction, thereby enhancing the processing capability of the server.

Description

A socket implementation method oriented to an Internet of Things platform
Technical field
The present invention is a design and optimization scheme for the communication module of a Socket server based on an Internet of Things platform, and belongs to the field of Internet of Things communication technology.
Background technology
With the continuous development of Internet of Things technology, sensors and RFID (Radio Frequency Identification) have come into wide use. The data collected by the sensing layer are transferred to the application layer through the communication layer, and the users these data serve also access them through the communication layer. As the hub of the whole system, the Internet of Things platform must therefore cope with the mass data collected by the sensing layer while also handling a large number of user accesses to the applications on the platform, and the communication layer carries the most critical task. A well-designed communication layer copes easily with massive data and access requests; otherwise the whole platform may suffer catastrophic consequences.
As an important inter-process communication mechanism, Socket is widely used in communication scenarios of the client/server (C/S) pattern. To cope with an enormous number of connections, the server must introduce multiple threads to process the connections in parallel. If every new connection were handled by a dynamically created thread, system performance would be greatly weakened, which is why thread pools that hold a number of pre-created threads arose. Whether the thread pool should be of fixed size, however, has long been a research question: a statically sized pool saves the cost of creating and destroying threads but can hardly cope with connection counts far beyond its capacity, and simply enlarging the pool leaves too many idle threads occupying considerable system resources; a dynamically sized pool, on the other hand, must keep creating new threads when facing a large number of connections, which greatly increases the system load. The form and capacity of the thread pool should therefore be decided by analysing the environment in which the system runs.
Current research on dynamic thread pools, at home and abroad, concentrates on three aspects: (1) creating a batch of threads when a burst of connections arrives, instead of creating one thread per request, while still capping the number of threads in the pool; (2) optimizing the number of worker threads by using statistical principles to predict the number of users at peak times, a relatively simple and fairly reliable strategy; (3) providing several thread pools on one server and dispatching tasks to different pools according to task type and priority.
Summary of the invention
Technical problem: the present invention builds a socket server with thread pool technology, designs a system operation support scheme, analyses the working mechanism of the thread pool in depth, calculates the overhead of a dynamic thread pool when it faces requests beyond its capacity, and proposes storing excess connection requests in a buffer pool rather than dynamically creating additional threads; the overall design is then optimized to cope with common emergency cases.
Technical scheme: although the dynamic thread pool is widely used, its limitations are as obvious as its advantages: when a large number of connection requests exceeding its capacity arrive at the same time, the overhead of dynamically creating and destroying threads can no longer be ignored. The proposed socket server scheme therefore places a buffer pool in front of the thread pool to hold the excess connections; whenever an idle thread appears in the thread pool, a new connection request is taken out of the buffer pool.
The Socket server completes network communication between two programs: the server provides its own IP (Internet Protocol) address and port number, and the client requests a connection to that address and port. The detailed process is:
the server creates a ServerSocket to listen for client connections; after receiving a request it establishes the connection, takes out the message, processes it and returns the result; the client sends a connection request to the server and, once connected, sends messages to or receives messages from the server.
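The flow just described maps directly onto the standard blocking socket API. A minimal Java sketch follows; the port number and the placeholder message handling are illustrative assumptions, not part of the patent.

```java
import java.io.*;
import java.net.*;

// Minimal sketch of the connect/process/reply flow described above.
// The port (9000) and the echo-style handling are illustrative assumptions.
public class SimpleSocketServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket listener = new ServerSocket(9000)) {
            while (true) {
                Socket client = listener.accept();               // wait for a client connection
                try (BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String message = in.readLine();               // take out the message
                    String result = "processed: " + message;      // stand-in for the upper-layer processing
                    out.println(result);                          // return the processed result
                }                                                 // closing the streams closes the socket
            }
        }
    }
}
```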
1. Communication module design
This part is the core of the socket server and performs its most important function, communication. The basic design of the server-side communication module is as follows:
To satisfy the requirement of duplex communication, that is, the server and the client sending and receiving data at the same time, a sending thread and a receiving thread are set up. What the thread pool holds are receiving threads: after a connection is established, the receiving thread takes out the Socket connection, receives messages and passes them to the upper layer through a receive message queue, and at the same time creates a sending thread that monitors the send message queue (the interface with the upper layer, where the upper layer deposits its processing results), takes the results out and sends them. Meanwhile the receiving thread stays blocked; when the client requests disconnection, the Socket connection is closed. In addition, to prevent the number of connections from growing too large, connections exceeding the number of threads in the pool are put into the buffer pool and taken out again when an idle thread appears; a sketch of this structure is given below.
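A condensed Java sketch of this design, assuming a fixed pool of receiving threads, a blocking queue standing in for the buffer pool, and a per-connection send message queue; the pool size, port handling and class names are illustrative assumptions.

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// Sketch of the communication module: a fixed thread pool of receiving threads,
// a buffer pool (queue) for connections that exceed the pool, and a per-connection
// send message queue. Pool size and port are illustrative assumptions.
public class PooledSocketServer {
    private static final int POOL_SIZE = 8;
    private final ExecutorService receiverPool = Executors.newFixedThreadPool(POOL_SIZE);
    // Buffer pool: connections wait here until a receiving thread is available.
    private final BlockingQueue<Socket> bufferPool = new LinkedBlockingQueue<>();

    public void start(int port) throws IOException {
        // Dispatcher: feed buffered connections to the thread pool.
        new Thread(() -> {
            while (true) {
                try {
                    Socket s = bufferPool.take();
                    receiverPool.execute(() -> handle(s));
                } catch (InterruptedException e) {
                    return;
                }
            }
        }).start();
        try (ServerSocket listener = new ServerSocket(port)) {
            while (true) {
                bufferPool.offer(listener.accept());   // excess connections simply queue up
            }
        }
    }

    private void handle(Socket socket) {
        BlockingQueue<String> sendQueue = new LinkedBlockingQueue<>();   // upper layer deposits results here
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            // Sending thread: monitors the send message queue and writes results back.
            Thread sender = new Thread(() -> {
                try {
                    while (true) out.println(sendQueue.take());
                } catch (InterruptedException ignored) { }
            });
            sender.start();
            String msg;
            while ((msg = in.readLine()) != null) {        // receiving thread blocks here
                sendQueue.put("processed: " + msg);        // stand-in for the upper layer's processing
            }
            sender.interrupt();
        } catch (IOException | InterruptedException e) {
            // client disconnected or interrupted; fall through and close the socket
        } finally {
            try { socket.close(); } catch (IOException ignored) { }
        }
    }
}
```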
2. Thread pool design
Definition 1. The two main costs in threading are the creation and destruction of threads, and the maintenance of threads. Let the first cost be C1 and the second C2:
C1: the cost of creating and destroying a thread, most of which is the time spent allocating memory for the thread;
C2: the cost of maintaining a thread, i.e. the thread's context-switch time;
n: the thread pool size;
r: the number of currently running threads.
In practice C1 >> C2. Table 1 contrasts the effect of using a thread pool on system performance.
Table 1. Overhead comparison before and after the system introduces the thread pool

Case         With thread pool        Without thread pool    Performance gain
0 ≤ r ≤ n    C2·n                    C1·r                   C1·r − C2·n
r > n        C2·n + C1·(r − n)       C1·r                   C1·n − C2·n
Table 1 considers two cases. When the number of live threads in the pool does not exceed the pool size (0 ≤ r ≤ n), the only cost with a thread pool is switching between threads, i.e. C2·n, whereas without a thread pool the system must create and destroy a thread for every new connection at a cost of C1·r; the first scheme therefore improves system performance by C1·r − C2·n.
In the second case the number of tasks exceeds the maximum of the thread pool, so the pool must create new threads for the surplus tasks at a cost of C2·n + C1·(r − n), while the cost without a thread pool is still C1·r; the former scheme improves system performance by C1·n − C2·n.
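As a rough worked illustration of the second row of Table 1, using the thread creation/destruction time of about 400 ms and the context-switch time of about 20 ms quoted later in the description, with n = 3600 and r = 4000:
With the thread pool: C2·n + C1·(r − n) = 20·3600 + 400·(4000 − 3600) = 232,000 ms
Without the thread pool: C1·r = 400·4000 = 1,600,000 ms
Performance gained: C1·n − C2·n = (400 − 20)·3600 = 1,368,000 ms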
The key issue of the first scheme is how to set the thread pool size. If there are too many threads in the pool, the system consumes a large amount of processing and cache resources just to maintain the idle threads; if there are too few, new threads must constantly be created dynamically and destroyed when their tasks end, and the price paid may exceed the resources consumed by the tasks themselves. How to determine the best thread count (n) is discussed below.
Definition 2. The number of live threads in the thread pool changes constantly in a real environment. Let r be the number of live threads and f(r) its distribution law; the expected performance gain for a thread pool of size n is:
E(n) = \sum_{r=0}^{n} (C_1 r - C_2 n) f(r) + \sum_{r=n+1}^{\infty} (C_1 n - C_2 n) f(r)    (1)
Let the optimal thread pool size be n^*; its expectation is:
E(n^*) = \sup_{n \in N} E(n)    (2)
where N denotes the set of possible thread counts in the pool and sup denotes the upper bound of the values of E(n). Rewriting (1) with p(r) the probability density function of the number of live threads in the pool:
E(n) = \int_{0}^{n} (C_1 r - C_2 n)\, p(r)\, dr + \int_{n}^{\infty} (C_1 n - C_2 n)\, p(r)\, dr    (3)
To obtain the maximum of E(n), differentiate (3) to get (4):
\frac{dE}{dn} = -C_2 + C_1 \int_{n}^{\infty} p(r)\, dr = 0    (4)
Rewriting (4), and letting \xi = C_2 / C_1 be the ratio of the cost of keeping a thread alive to the cost of creating a new one:
\int_{n^*}^{\infty} p(r)\, dr = \xi, \ \text{i.e.}\ \int_{0}^{n^*} p(r)\, dr = 1 - \xi    (5)
Since the thread pool size is an integer, its value is determined by:
\int_{0}^{n^*} p(r)\, dr \le 1 - \xi < \int_{0}^{n^*+1} p(r)\, dr    (6)
This shows that n^* is related to \xi: when the thread context-switch overhead is much smaller than the cost of thread creation and destruction, the thread pool capacity should be larger. Equations (5) and (6) also show that n^* depends on the current system load p(r).
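A minimal numerical sketch of condition (6), assuming the live-thread distribution has been discretized into an array p[r]; the class, method and parameter names are illustrative and not taken from the patent.

```java
// Sketch of determining the optimal pool size n* from condition (6):
// n* is the largest n whose cumulative probability P(r <= n) does not exceed 1 - xi.
// The cost values and the toy distribution in main() are illustrative assumptions.
public class PoolSizer {
    /**
     * @param p   discrete probability p[r] that exactly r threads are live (sums to 1)
     * @param c1  cost of creating and destroying a thread (e.g. ~400 ms)
     * @param c2  cost of maintaining a thread, i.e. a context switch (e.g. ~20 ms)
     */
    public static int optimalPoolSize(double[] p, double c1, double c2) {
        double xi = c2 / c1;            // ratio from equation (5)
        double cumulative = 0.0;
        int nStar = 0;
        for (int n = 0; n < p.length; n++) {
            cumulative += p[n];
            if (cumulative <= 1 - xi) {
                nStar = n;              // still satisfies the first inequality of (6)
            } else {
                break;                  // P(r <= n) > 1 - xi, so the previous n is n*
            }
        }
        return nStar;
    }

    public static void main(String[] args) {
        double[] p = {0.1, 0.2, 0.4, 0.2, 0.1};   // toy distribution over 0..4 live threads
        System.out.println("n* = " + optimalPoolSize(p, 400.0, 20.0));
    }
}
```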
The Socket design and optimization scheme for an Internet of Things platform of the present invention works in the communication layer of the platform: a buffer pool and a thread pool are used to establish multi-threaded concurrent connections, and an efficient Socket server design scheme is proposed that answers large numbers of users and data requests while minimizing system overhead and the use of the cache. The socket server is built with thread pool technology; a system operation support scheme is designed; the working mechanism of the thread pool is analysed in depth, the overhead of a dynamic thread pool facing requests beyond its capacity is calculated, and a buffer pool is proposed to store excess connection requests rather than dynamically creating additional threads; the overall design is then optimized to cope with common emergency cases.
The socket server built with thread pool technology comprises the communication module design and the thread pool design. Although the dynamic thread pool is widely used, its limitations are as obvious as its advantages: when a large number of connection requests exceeding its capacity arrive simultaneously, the overhead of dynamically creating and destroying threads cannot be ignored. The proposed scheme therefore adds a buffer pool in front of the thread pool to hold the excess connections; when an idle thread appears in the thread pool, a new connection request is taken out of the buffer pool.
The Socket server completes network communication between two programs: the server provides its own IP address and port number, and the client requests a connection to that address and port. The detailed process is:
the server creates a ServerSocket to listen for client connections; after receiving a request it establishes the connection, takes out the message, processes it and returns the result; the client sends a connection request to the server and, once connected, sends messages to or receives messages from the server.
The communication module design is the core of the socket server and performs its most important function, communication. The basic design of the server-side communication module is as follows:
To satisfy the requirement of duplex communication, that is, the server and the client sending and receiving data at the same time, a sending thread and a receiving thread are set up. What the thread pool holds are receiving threads: after a connection is established, the receiving thread takes out the Socket connection, receives messages and passes them to the upper layer through a receive message queue, and at the same time creates a sending thread that monitors the send message queue (the interface with the upper layer, where the upper layer deposits its processing results), takes the results out and sends them. Meanwhile the receiving thread stays blocked; when the client requests disconnection, the Socket connection is closed. In addition, to prevent the number of connections from growing too large, connections exceeding the number of threads in the pool are put into the buffer pool and taken out again when an idle thread appears.
The thread pool design concerns determining the thread pool size. The two main costs in the design are the creation and destruction of threads and the maintenance of threads; let the first be C1 and the second C2. C1 is mostly the time spent allocating memory for the thread, and C2 is the thread's context-switch time; in practice C1 >> C2. The number of live threads in the pool changes constantly in a real environment; let r be the number of live threads and f(r) its distribution law; the expected performance gain for a thread pool of size n is:
E(n) = \sum_{r=0}^{n} (C_1 r - C_2 n) f(r) + \sum_{r=n+1}^{\infty} (C_1 n - C_2 n) f(r)    (1)
Let the optimal thread pool size be n^*; its expectation is:
E(n^*) = \sup_{n \in N} E(n)    (2)
where N denotes the set of possible thread counts in the pool and sup denotes the upper bound of the values of E(n). Rewriting (1) with p(r) the probability density function of the number of live threads in the pool:
E(n) = \int_{0}^{n} (C_1 r - C_2 n)\, p(r)\, dr + \int_{n}^{\infty} (C_1 n - C_2 n)\, p(r)\, dr    (3)
To obtain the maximum of E(n), differentiate (3) to get (4):
\frac{dE}{dn} = -C_2 + C_1 \int_{n}^{\infty} p(r)\, dr = 0    (4)
Rewriting (4), and letting \xi = C_2 / C_1 be the ratio of the cost of keeping a thread alive to the cost of creating a new one:
\int_{n^*}^{\infty} p(r)\, dr = \xi, \ \text{i.e.}\ \int_{0}^{n^*} p(r)\, dr = 1 - \xi    (5)
Since the thread pool size is an integer, its value is determined by:
\int_{0}^{n^*} p(r)\, dr \le 1 - \xi < \int_{0}^{n^*+1} p(r)\, dr    (6)
This shows that n^* is related to \xi: when the thread context-switch overhead is much smaller than the cost of thread creation and destruction, the thread pool capacity should be larger. Formulas (5) and (6) also show that n^* depends on the current system load p(r).
When assessing the system's load capacity, assume p(r) is uniformly distributed and the number of users is 4000; creating and then destroying a thread takes about 400 ms and a context switch about 20 ms, so \xi = 20/400 = 0.05. Evaluating (5) and (6) under these assumptions gives n^* = 3600.
The system operation support scheme: the system uses a self-defined protocol, and the client and the server communicate by sending and receiving data packets of roughly 100 bytes; after processing a message, the system returns the result in packet form. To achieve duplex operation, a sending thread and a receiving thread are set up, the sending thread being a child thread of the receiving thread. Messages received by the server are put into the receive message queue and handed to the business layer; when the business layer finishes, each thread goes to the send message queue, takes out its own result and sends it.
The common emergency cases are four: 1) how to mark the tasks of different clients; 2) blocking while receiving data; 3) read/write operations still being performed after the Socket is closed; 4) unexpected client disconnection.
How to mark the tasks of different clients. The solution is: since each task belongs to a different client, the client's IP address and port number can be obtained from the Socket object belonging to that client; a Mark field holding the client's IP and port is set, its value being the IP and port of the connection. Inside a while loop, the thread of each task polls the topmost element of the message queue (MessageQueue); as soon as it finds the matching Mark it takes the message out, otherwise the thread suspends.
Blocking while receiving data. In this system the client clicks the interface, the program sends a packet to the server, and the server then carries out the corresponding function, so the tasks occupy little CPU but frequently perform blocking I/O (Input/Output) operations. If a user does not click the interface for a long time, a worker thread in the thread pool is held by that user all the while and can execute no other task; if all threads in the pool are blocked, newly arrived tasks cannot be processed. Moreover, when the sender's data volume is too large, the receiving end may be unable to receive it all at once.
The solution is: define three variables nIdx, nTotalLen and nReadLen (nIdx holds the number of bytes read so far, nTotalLen is the total number of bytes to read and nReadLen is the number of bytes read in one loop iteration); the value of nTotalLen is determined by a field in the packet header. The while loop runs until the end of the input stream is reached. To avoid busy-waiting, an unhealthy way of using the CPU, a second while loop suspends the thread when there is no data to read and wakes it up again when input arrives.
Read/write operations still performed after the Socket is closed. The solution is: since closing the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the sending thread finishes its work. A notification mechanism is set up so that the sending thread notifies the receiving thread when it finishes and only then does the receiving thread close the Socket; concretely, a protocol field PacSeq (the sequence number of the received request packet) is added. For example, when the receiving thread receives a packet with PacSeq = x, the sending thread notifies the receiving thread after sending back the result required by that request packet, and the receiving thread keeps the Socket connection open until that notice arrives.
Unexpected client disconnection. The solution is: the client sends a heartbeat packet every 5 minutes; on the server side the receiving thread creates a daemon thread that monitors the heartbeat packets and counts down. The receiving thread does not process heartbeat packets itself; it simply notifies the daemon thread to reset the timer and then keeps the connection to the client. If the server receives nothing within 5 minutes, the daemon thread sends a notice that wakes the receiving thread, which closes the socket. During the countdown all three threads are suspended, so no system resources are occupied. If the client receives no feedback, or an error occurs, within a certain time after sending a request, it concludes that the server has disconnected and resends the connection request after the disconnection.
Beneficial effect: aiming at the massive demand brought by the Internet of Things, the present invention designs the server communication module with sockets, studies in depth how the thread pool setting affects system performance, uses a buffer queue to handle the threads that exceed the thread pool capacity, and analyses and optimizes the problems the system may encounter. Finally the system performance is simulated; the results show that a thread pool combined with a buffer pool greatly reduces system overhead when facing a moderately excessive number of connections, that short connections should not be used for tasks with small overhead, and that although the buffer pool reduces system overhead when the number of connections only slightly exceeds the thread pool capacity, once it reaches or exceeds a certain threshold the system response time rises sharply.
Description of drawings
Fig. 1 shows the Socket setup process;
Fig. 2 is the operation flow chart of the communication module;
Fig. 3 is the flow chart for avoiding exceptions after the Socket is closed;
Fig. 4 is the heartbeat monitoring flow chart;
Fig. 5 compares the socket connection speed in experiment 1;
Fig. 6 is the bar chart of the system's BT (network throughput) performance in experiment 2;
Fig. 7 is the bar chart of the system's DT performance in experiment 2;
Fig. 8 is the bar chart of the system's DQL performance in experiment 2;
Fig. 9 is the bar chart of the system's DBT performance in experiment 2;
Fig. 10 compares the SRT (system response time) in experiment 3.
Embodiment
System operation support
The system uses a self-defined protocol, and the client and the server communicate by sending and receiving data packets of roughly 100 bytes; after processing a message, the system returns the result in packet form. To achieve duplex operation, a sending thread and a receiving thread are set up, the sending thread being a child thread of the receiving thread. Messages received by the server are put into the receive message queue and handed to the business layer for processing; when the business layer finishes, each thread goes to the send message queue, takes out its own result and sends it. Four situations arise here; the solutions are as follows:
Situation one: how to mark the tasks of different clients.
Solution: since each task belongs to a different client, the client's IP address and port number can be obtained from the Socket object belonging to that client; a Mark field is set whose value is the IP and port of the connection. Inside a while loop, the thread of each task polls the topmost element of the message queue (MessageQueue); as soon as it finds the matching Mark it takes the message out, otherwise the thread suspends. A sketch of this dispatch follows.
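A minimal Java sketch of this Mark-based dispatch; the Message class, its field layout and the polling interval are illustrative assumptions, while the MessageQueue and Mark names follow the description.

```java
import java.util.concurrent.*;

// Sketch of situation one: each task's thread polls the shared message queue and
// only takes messages whose Mark (client IP + port) matches its own connection.
class Message {
    final String mark;      // "ip:port" of the client this message belongs to
    final String payload;
    Message(String mark, String payload) { this.mark = mark; this.payload = payload; }
}

class ClientTask implements Runnable {
    private final String myMark;                       // taken from the client's Socket (IP + port)
    private final BlockingQueue<Message> messageQueue; // shared MessageQueue

    ClientTask(String myMark, BlockingQueue<Message> messageQueue) {
        this.myMark = myMark;
        this.messageQueue = messageQueue;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Message top = messageQueue.peek();          // look at the topmost element
            if (top != null && top.mark.equals(myMark)) {
                messageQueue.poll();                    // the Mark matches: take the message out
                process(top);
            } else {
                try {
                    Thread.sleep(10);                   // otherwise suspend briefly before polling again
                } catch (InterruptedException e) {
                    return;
                }
            }
        }
    }

    private void process(Message m) { /* hand the message to the business layer */ }
}
```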
Situation two: blocking while receiving data.
Solution: in this system the client clicks the interface, the program sends a packet to the server, and the server then carries out the corresponding function, so the tasks occupy little CPU but frequently perform blocking I/O operations. If a user does not click the interface for a long time, a worker thread in the thread pool is held by that user all the while and can execute no other task; if all threads in the pool are blocked, newly arrived tasks cannot be processed. Moreover, when the sender's data volume is too large, the receiving end may be unable to receive it all at once.
To solve this, three variables nIdx, nTotalLen and nReadLen are defined, holding respectively the number of bytes read so far, the total number of bytes to read and the number of bytes read in one loop iteration; the value of nTotalLen is determined by a field in the packet header. The while loop runs until the end of the input stream is reached. To avoid busy-waiting, an unhealthy way of using the CPU, a second while loop suspends the thread when there is no data to read and wakes it up again when input arrives. A sketch of this read loop follows.
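A Java sketch of this length-prefixed read loop; the 4-byte length field in the packet header is an assumption, while the variable names nIdx, nTotalLen and nReadLen follow the description.

```java
import java.io.*;

// Sketch of situation two: read exactly nTotalLen bytes from the input stream,
// where nTotalLen comes from a length field in the packet header.
// A 4-byte big-endian length prefix is assumed here for illustration.
public class PacketReader {
    public static byte[] readPacket(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        int nTotalLen = din.readInt();        // total bytes to read, taken from the packet header
        byte[] buffer = new byte[nTotalLen];
        int nIdx = 0;                         // bytes read so far
        while (nIdx < nTotalLen) {
            int nReadLen = din.read(buffer, nIdx, nTotalLen - nIdx); // bytes read in this iteration
            if (nReadLen == -1) {
                throw new EOFException("stream ended before the full packet arrived");
            }
            nIdx += nReadLen;                 // blocking read: the thread sleeps until data arrives,
        }                                     // so there is no busy-waiting on the CPU
        return buffer;
    }
}
```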
Situation three: read/write operations still performed after the Socket is closed.
Solution: since closing the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the sending thread finishes its work. A notification mechanism is set up so that the sending thread notifies the receiving thread when it finishes and only then does the receiving thread close the Socket. As shown in Fig. 3, a protocol field PacSeq is added to number the received request packets; for example, when the receiving thread receives a packet with PacSeq = x, the sending thread notifies the receiving thread after sending back the result required by that request packet, and the receiving thread keeps the Socket connection open until that notice arrives. A sketch of this mechanism follows.
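A Java sketch of the PacSeq notification mechanism, in which a CountDownLatch per request packet stands in for the notice sent by the sending thread; the class and method names are illustrative assumptions.

```java
import java.net.Socket;
import java.util.concurrent.*;

// Sketch of situation three: the receiving thread may not close the Socket until the
// sending thread reports that the reply for packet PacSeq = x has gone out.
public class PacSeqGuard {
    private final ConcurrentMap<Integer, CountDownLatch> pending = new ConcurrentHashMap<>();

    // Receiving thread: remember that a reply for this PacSeq is still owed.
    public void onRequestReceived(int pacSeq) {
        pending.put(pacSeq, new CountDownLatch(1));
    }

    // Sending thread: the reply for this PacSeq has been sent back, notify the receiver.
    public void onReplySent(int pacSeq) {
        CountDownLatch latch = pending.remove(pacSeq);
        if (latch != null) latch.countDown();
    }

    // Receiving thread: keep the connection open until every outstanding reply is sent.
    public void closeWhenDone(Socket socket) throws InterruptedException, java.io.IOException {
        for (CountDownLatch latch : pending.values()) {
            latch.await();                   // wait for the sending thread's notice
        }
        socket.close();                      // only now is it safe to close the Socket
    }
}
```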
Situation four: unexpected client disconnection.
Solution: this problem can be solved with the algorithm shown in Fig. 4. The client sends a heartbeat packet every 5 minutes; on the server side the receiving thread creates a daemon thread that monitors the heartbeat packets and counts down. The receiving thread does not process heartbeat packets itself; it simply notifies the daemon thread to reset the timer and then keeps the connection to the client. If the server receives nothing within 5 minutes, the daemon thread sends a notice that wakes the receiving thread, which closes the socket. During the countdown all three threads are suspended, so no system resources are occupied. If the client receives no feedback, or an error occurs, within a certain time after sending a request, it concludes that the server has disconnected and resends the connection request after the disconnection. A sketch of this countdown follows.
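A Java sketch of the heartbeat countdown; modelling the daemon thread with a ScheduledExecutorService that is reset on every heartbeat and closes the socket (thereby unblocking the receiving thread) after 5 minutes of silence is an illustrative choice, not the patent's exact mechanism.

```java
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.*;

// Sketch of situation four: a daemon timer counts down from 5 minutes; every heartbeat
// packet resets it, and if it expires the socket is closed, which also unblocks the
// receiving thread's read.
public class HeartbeatMonitor {
    private static final long TIMEOUT_MINUTES = 5;
    private final ScheduledExecutorService daemon =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });
    private final Socket socket;
    private ScheduledFuture<?> countdown;

    public HeartbeatMonitor(Socket socket) {
        this.socket = socket;
        restartCountdown();
    }

    // Called by the receiving thread whenever a heartbeat packet arrives: reset the timer.
    public synchronized void onHeartbeat() {
        countdown.cancel(false);
        restartCountdown();
    }

    private synchronized void restartCountdown() {
        countdown = daemon.schedule(() -> {
            try {
                socket.close();              // nothing heard for 5 minutes: drop the connection
            } catch (IOException ignored) { }
        }, TIMEOUT_MINUTES, TimeUnit.MINUTES);
    }
}
```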
We use LoadRunner software for simulation testing. LoadRunner is a load-testing tool that predicts system behaviour and performance; it generates virtual users and simulates the business operations of real users with them. A socket server was built for the experiments, with a basic configuration of a dual-core 2.4 GHz CPU and 4 GB of memory; the database is Oracle 11g.
Experiment 1 examines the time needed to establish a socket connection. Experiment 2 analyses the performance of the system when it dynamically creates new threads for excess connections, with 4000 connections and a thread pool capacity of 3600. Experiment 3 introduces the buffer pool on top of experiment 2 and then observes the change in system performance.
Experiment 1: socket connection speed test
Establishing a socket connection takes a relatively long time; if the server adopts short connections, the time spent connecting cannot be ignored. To verify the socket connection speed, 100 virtual users are generated to issue connection requests to the server. The client program performs two actions:
1) attempt to connect to the server;
2) after connecting, send a packet and display the server's feedback. The server also performs two actions: 1) listen for client connection requests; 2) send the client's packet back after the connection succeeds.
The behaviour of the virtual users in the experiment is shown in Fig. 5, which plots the connection speed: the vertical axis is the number of virtual users and the horizontal axis is the running time; the blue line is the number of running virtual users and the green line is the number that have finished. It can be seen that 5 seconds elapsed from the first user connecting to the last user finishing, and most users only began sending data in the 4th second after connecting; in other words, connecting took four fifths of the time of the whole operation.
This experiment shows that when the overhead of the task itself is small and similar tasks may occur frequently, the server and client should not use short connections; instead the connection should be kept alive for a while after each task is completed, to avoid the system overhead of repeatedly disconnecting and reconnecting.
Experiment 2: comparison of server performance parameters
To verify the performance impact of dynamically creating new threads once the number of connections exceeds the thread pool capacity, a buffer pool is set up between the listening accept method and the thread pool; surplus connections are deposited in the pool and taken out to run when the thread pool has a free thread.
The performance of the system facing connections beyond the thread pool capacity is examined with and without the buffer pool. Four runs are carried out, with the thread pool set to 3600 and the number of connections set to 3600, 3800, 4000 and 4200 respectively. Without the buffer pool, the system dynamically creates new threads for the requests that exceed the pool capacity. To understand the memory and disk situation of the server, the parameters in Table 2 are observed:
Table 2 server performance parameter
The content of this experiment is the same as experiment 1, but to increase the pressure on the server the whole process is iterated: after a client completes its operations it disconnects, requests a new connection and performs the same operations again, and the whole process lasts five minutes. The results are shown in Fig. 6, which gives the system's network throughput (BT) under these conditions. When the number of connections equals the thread pool capacity, the presence or absence of the buffer pool has little effect on throughput. When the number rises to 3800, the system with the buffer pool does not need to create threads on the fly, so its resources go mainly into data transfer and its throughput is higher than that of the system without the buffer pool. At 4000 connections the two throughputs are roughly level, and after 4200 connections the throughput of the system without the buffer pool exceeds that of the system with the buffer pool by nearly 25%, which shows that the processing efficiency of the buffer-pool system cannot rise much further when it faces too many connections.
Figs. 7 to 9 show how the disk activity of the system changes before and after the buffer pool is added. The DT, DQL and DBT overheads of the system without the buffer pool grow with the number of connections; at 4200 connections they have risen by 26.6%, 20.3% and 45.9% respectively compared with the initial 3600 connections. This is mainly because the system keeps creating and destroying threads in real time and the total number of running threads grows larger and larger, consuming a large amount of memory, so the system has to use the disk as virtual memory and the burden on the hard disk increases as well. After the buffer pool is added, although the throughput when facing 4200 connections is not ideal, the system overhead stays essentially flat: the smaller number of running threads lowers the context-switch overhead, and since requests beyond the thread pool capacity are deposited in the buffer pool there is no need to dynamically create and destroy threads, which significantly reduces the system load.
This shows that connections which exceed the thread pool capacity but are few in number should be put into the buffer pool rather than handled by dynamically created threads; otherwise system performance drops sharply, because of the cost of creating and destroying threads and the context-switch cost of maintaining them. Rashly increasing the thread count in order to admit more connections may improve processing capacity, but it is likely to degrade system performance. Throughout, operations on the hard disk should be avoided as far as possible, because they drag down the running efficiency of the system considerably.
Experiment 3: system response time comparison test
Experiment 2 shows that placing the connections which exceed the thread pool capacity in the buffer pool and waiting for idle threads to appear reduces system overhead, but when the number of connections is too large the system reacts slowly to new connections and new users may have to wait a long time to connect. To find the inflection point of the SRT, the above experiment is repeated while the number of connection requests is continually increased and the system response time is observed; the result is shown in Fig. 10, whose horizontal axis is the number of concurrent connections borne by the system and whose vertical axis is the response time to connection requests. At 3600 requests the presence or absence of the buffer pool makes little difference, because no new threads are created. At 3800 requests the SRT with the buffer pool is shorter, because waiting in the pool takes less time than creating a new thread. When the number of requests reaches 4000, however, the response time of the system with the buffer pool jumps to 497 ms, and this growth is non-linear, i.e. not proportional to the number of connections, while the SRT without the buffer pool hardly changes and stays at 422 ms; this is because creating new threads only adds system overhead and does not slow down the processing of each individual task. When the number of connections reaches 4200, the SRT with the buffer pool is 885 ms, far higher than the 442 ms without it.
Inventive point 1: building the socket server with thread pool technology
Communication module design
To satisfy the requirement of duplex communication, that is, the server and the client sending and receiving data at the same time, a sending thread and a receiving thread are set up. What the thread pool of Fig. 2 holds are receiving threads: after a connection is established, the receiving thread takes out the Socket connection, receives messages and passes them to the upper layer through a receive message queue, and at the same time creates a sending thread that monitors the send message queue (the interface with the upper layer, where the upper layer deposits its processing results), takes the results out and sends them. Meanwhile the receiving thread stays blocked; when the client requests disconnection, the Socket connection is closed. In addition, to prevent the number of connections from growing too large, connections exceeding the number of threads in the pool are put into the buffer pool and taken out again when an idle thread appears.
The thread pool design
The two main costs in the thread pool design are the creation and destruction of threads and the maintenance of threads; let the first be C1 and the second C2. C1 is mostly the time spent allocating memory for the thread, and C2 is the thread's context-switch time; in practice C1 >> C2. The number of live threads in the pool changes constantly in a real environment; let r be the number of live threads and f(r) its distribution law; the expected performance gain for a thread pool of size n is:
E(n) = \sum_{r=0}^{n} (C_1 r - C_2 n) f(r) + \sum_{r=n+1}^{\infty} (C_1 n - C_2 n) f(r)    (1)
Let the optimal thread pool size be n^*; its expectation is:
E(n^*) = \sup_{n \in N} E(n)    (2)
where N denotes the set of possible thread counts in the pool and sup denotes the upper bound of the values of E(n). Rewriting (1) with p(r) the probability density function of the number of live threads in the pool:
E(n) = \int_{0}^{n} (C_1 r - C_2 n)\, p(r)\, dr + \int_{n}^{\infty} (C_1 n - C_2 n)\, p(r)\, dr    (3)
To obtain the maximum of E(n), differentiate (3) to get (4):
\frac{dE}{dn} = -C_2 + C_1 \int_{n}^{\infty} p(r)\, dr = 0    (4)
Rewriting (4), and letting \xi = C_2 / C_1 be the ratio of the cost of keeping a thread alive to the cost of creating a new one:
\int_{n^*}^{\infty} p(r)\, dr = \xi, \ \text{i.e.}\ \int_{0}^{n^*} p(r)\, dr = 1 - \xi    (5)
Since the thread pool size is an integer, its value is determined by:
\int_{0}^{n^*} p(r)\, dr \le 1 - \xi < \int_{0}^{n^*+1} p(r)\, dr    (6)
This shows that n^* is related to \xi: when the thread context-switch overhead is much smaller than the cost of thread creation and destruction, the thread pool capacity should be larger. Equations (5) and (6) also show that n^* depends on the current system load p(r).
When assessing the system's load capacity, assume p(r) is uniformly distributed and the number of users is 4000; creating and then destroying a thread takes about 400 ms and a context switch about 20 ms, so \xi = 20/400 = 0.05. Evaluating (5) and (6) under these assumptions gives n^* = 3600.
Inventive point 2: how to mark the tasks of different clients
Since each task belongs to a different client, the client's IP address and port number can be obtained from the Socket object belonging to that client; a Mark field is set whose value is the IP and port of the connection. Inside a while loop, the thread of each task polls the topmost element of the message queue MessageQueue; as soon as it finds the matching Mark it takes the message out, otherwise the thread suspends.
Inventive point 3: solving the blocking that occurs while receiving data
In this system the client clicks the interface, the program sends a packet to the server, and the server then carries out the corresponding function, so the tasks occupy little CPU but frequently perform blocking I/O operations. If a user does not click the interface for a long time, a worker thread in the thread pool is held by that user all the while and can execute no other task; if all threads in the pool are blocked, newly arrived tasks cannot be processed. Moreover, when the sender's data volume is too large, the receiving end may be unable to receive it all at once.
To solve this, three variables nIdx, nTotalLen and nReadLen are defined, holding respectively the number of bytes read so far, the total number of bytes to read and the number of bytes read in one loop iteration; the value of nTotalLen is determined by a field in the packet header. The while loop runs until the end of the input stream is reached. To avoid busy-waiting, an unhealthy way of using the CPU, a second while loop suspends the thread when there is no data to read and wakes it up again when input arrives.
Inventive point 4: solving read/write operations still performed after the Socket is closed
Since closing the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the sending thread finishes its work. A notification mechanism is set up so that the sending thread notifies the receiving thread when it finishes and only then does the receiving thread close the Socket. As shown in Fig. 3, a protocol field PacSeq is added to number the received request packets; for example, when the receiving thread receives a packet with PacSeq = x, the sending thread notifies the receiving thread after sending back the result required by that request packet, and the receiving thread keeps the Socket connection open until that notice arrives.
Inventive point 5: solving unexpected client disconnection
This problem can be solved with the algorithm shown in Fig. 4: the client sends a heartbeat packet every 5 minutes; on the server side the receiving thread creates a daemon thread that monitors the heartbeat packets and counts down. The receiving thread does not process heartbeat packets itself; it simply notifies the daemon thread to reset the timer and then keeps the connection to the client. If the server receives nothing within 5 minutes, the daemon thread sends a notice that wakes the receiving thread, which closes the socket. During the countdown all three threads are suspended, so no system resources are occupied. If the client receives no feedback, or an error occurs, within a certain time after sending a request, it concludes that the server has disconnected and resends the connection request after the disconnection.

Claims (10)

1. A socket implementation method oriented to an Internet of Things platform, characterized in that, in the communication layer of the Internet of Things platform, a buffer pool and a thread pool are used to establish multi-threaded concurrent connections and an efficient socket server design method is proposed, specifically as follows:
1) build the socket server with thread pool technology;
2) design a system operation support scheme;
3) analyse the working mechanism of the thread pool in depth, calculate the overhead of a dynamic thread pool when it faces requests beyond its capacity, and propose using a buffer pool to store excess connection requests;
4) optimize the overall design to cope with common emergency cases.
2. The socket implementation method oriented to an Internet of Things platform according to claim 1, characterized in that building the socket server with thread pool technology comprises the communication module design and the thread pool design; the method adds a buffer pool in front of the thread pool to hold excess connections, and when an idle thread appears in the thread pool a new connection request is taken out of the buffer pool; the socket server completes network communication between two programs: the server provides its own IP address and port number, and the client requests a connection to that address and port; the detailed process is: the server creates a server socket to listen for client connections, establishes the connection after receiving a request, takes out the message, processes it and returns the processed result; the client sends a connection request to the server and, once connected, sends messages to or receives messages from the server.
3. The socket implementation method oriented to an Internet of Things platform according to claim 1, characterized in that the system operation support scheme is: the system uses a self-defined protocol, the client and the server communicate by sending and receiving data packets of roughly 100 bytes, and after processing a message the system returns it in packet form; to achieve duplex operation, a sending thread and a receiving thread are set up, the sending thread being a child thread of the receiving thread; messages received by the server are put into a receive message queue and handed to the business layer; when the business layer finishes, each thread goes to the send message queue, takes out its own result and sends it.
4. The socket implementation method oriented to an Internet of Things platform according to claim 1, characterized in that the common emergency cases are four: 1) how to mark the tasks of different clients; 2) blocking while receiving data; 3) read/write operations still being performed after the Socket is closed; 4) unexpected client disconnection.
5. The socket implementation method oriented to an Internet of Things platform according to claim 2, characterized in that the basic design of the communication module is:
to satisfy the requirement of duplex communication, namely the server and the client sending and receiving data at the same time, a sending thread and a receiving thread are set up; what the thread pool holds are receiving threads; after a connection is established the receiving thread takes out the Socket connection, receives messages and passes them to the upper layer through a receive message queue, and at the same time creates a sending thread that monitors the send message queue, i.e. the interface with the upper layer where the upper layer deposits its processing results, takes the results out and sends them; meanwhile the receiving thread stays blocked, and when the client requests disconnection the Socket connection is closed; in addition, to prevent the number of connections from growing too large, connections exceeding the number of threads in the pool are put into the buffer pool and taken out again when an idle thread appears.
6. The socket implementation method oriented to an Internet of Things platform according to claim 2, characterized in that, in the thread pool design, the key issue is how to set the thread pool size, and the optimal thread pool size n^* is determined as follows:
the two main costs in threading are the creation and destruction of threads and the maintenance of threads; let the first cost be C1 and the second C2, where C1 is mostly the time spent allocating memory for the thread and C2 is the thread's context-switch time; in practice C1 >> C2; suppose the thread pool size is a fixed value n and the number of currently running threads is r, and consider the effect of using the thread pool on system performance in two cases:
1) when the number of live threads in the pool does not exceed the pool size, 0 ≤ r ≤ n, the only cost with a thread pool is switching between threads, i.e. C2·n; without a thread pool the system must create and destroy a thread for every new connection at a cost of C1·r, so using the thread pool improves system performance by C1·r − C2·n;
2) when the number of live threads in the pool exceeds the pool size, r > n, the number of tasks has exceeded the maximum of the thread pool and the pool must create new threads for the surplus tasks at a cost of C2·n + C1·(r − n), while the cost without a thread pool is still C1·r; using the thread pool improves system performance by C1·n − C2·n;
the number of live threads in the thread pool changes constantly in a real environment; let the variable r be the number of currently running threads and f(r) its distribution law; the expected performance gain for a thread pool of size n is:
E(n) = \sum_{r=0}^{n} (C_1 r - C_2 n) f(r) + \sum_{r=n+1}^{\infty} (C_1 n - C_2 n) f(r)    (1)
let the optimal thread pool size be n^*; its expectation is:
E(n^*) = \sup_{n \in N} E(n)    (2)
where N denotes the set of possible values of the number of threads in the pool and sup denotes the upper bound of the values of E(n); rewriting (1) with p(r) the probability density function of the number of live threads in the pool:
E(n) = \int_{0}^{n} (C_1 r - C_2 n)\, p(r)\, dr + \int_{n}^{\infty} (C_1 n - C_2 n)\, p(r)\, dr    (3)
to obtain the maximum of E(n), differentiate (3) to get formula (4):
\frac{dE}{dn} = -C_2 + C_1 \int_{n}^{\infty} p(r)\, dr = 0    (4)
rewriting (4), and letting \xi = C_2 / C_1 be the ratio of the cost of keeping a thread alive to the cost of creating a new one:
\int_{n^*}^{\infty} p(r)\, dr = \xi, \ \text{i.e.}\ \int_{0}^{n^*} p(r)\, dr = 1 - \xi    (5)
since the thread pool size is an integer, its value is determined by:
\int_{0}^{n^*} p(r)\, dr \le 1 - \xi < \int_{0}^{n^*+1} p(r)\, dr    (6)
this shows that n^* is related to \xi: when the thread context-switch overhead is much smaller than the cost of thread creation and destruction, the thread pool capacity should be larger; formulas (5) and (6) also show that n^* depends on the current system load p(r).
7. The socket implementation method oriented to an Internet of Things platform according to claim 5, characterized in that, for marking the tasks of different clients, the solution is: since each task belongs to a different client, the client's IP address and port number are obtained from the Socket object belonging to that client, and a Mark field holding the client's IP and port is set, its value being the IP and port of the connection; inside a while loop, the thread of each task polls the topmost element of the message queue MessageQueue and, as soon as it finds the matching Mark, takes the message out; otherwise the thread suspends.
8. The socket design method oriented to an Internet of Things platform according to claim 5, characterized in that blocking while receiving data means: in this system the client clicks the interface, the program sends a packet to the server, and the server then carries out the corresponding function, so the tasks occupy little CPU but frequently perform blocking I/O operations; if a user does not click the interface for a long time, a worker thread in the thread pool is held by that user all the while and can execute no other task; if all threads in the pool are blocked, newly arrived tasks cannot be processed, and when the sender's data volume is too large the receiving end may be unable to receive it all;
the solution is: define three variables nIdx, nTotalLen and nReadLen, where nIdx holds the number of bytes read so far, nTotalLen is the total number of bytes to read and nReadLen is the number of bytes read in one loop iteration; the value of nTotalLen is determined by a field in the packet header; the while loop runs until the end of the input stream is reached; to avoid busy-waiting, an unhealthy way of using the CPU, a second while loop suspends the thread when there is no data to read and wakes it up again when input arrives.
9. The socket design method oriented to an Internet of Things platform according to claim 5, characterized in that, for read/write operations still being performed after the Socket is closed, the solution is: since closing the Socket is controlled by the receiving thread, the receiving thread may close the Socket before the sending thread finishes its work; a notification mechanism is set up so that the sending thread notifies the receiving thread when it finishes and only then does the receiving thread close the Socket, namely a protocol field PacSeq, the sequence number of the received request packet, is added; for example, when the receiving thread receives a packet with PacSeq = x, the sending thread notifies the receiving thread after sending back the result required by that request packet, and the receiving thread keeps the Socket connection open until it receives that notice.
10. The socket design method oriented to an Internet of Things platform according to claim 5, characterized in that, for unexpected client disconnection, the solution is: the client sends a heartbeat packet every 5 minutes; on the server side the receiving thread creates a daemon thread that monitors the heartbeat packets and counts down; the receiving thread does not process heartbeat packets itself but simply notifies the daemon thread to reset the timer and then keeps the connection to the client; if the server receives nothing within 5 minutes, the daemon thread sends a notice that wakes the receiving thread, which closes the socket; during the countdown all three threads are suspended, so no system resources are occupied; if the client receives no feedback, or an error occurs, within a certain time after sending a request, it concludes that the server has disconnected and resends the connection request after the disconnection.
CN201210038597.XA 2012-02-20 2012-02-20 Internet of things platform-oriented socket implementation method Active CN102546437B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210038597.XA CN102546437B (en) 2012-02-20 2012-02-20 Internet of things platform-oriented socket implementation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210038597.XA CN102546437B (en) 2012-02-20 2012-02-20 Internet of things platform-oriented socket implementation method

Publications (2)

Publication Number Publication Date
CN102546437A true CN102546437A (en) 2012-07-04
CN102546437B CN102546437B (en) 2014-10-22

Family

ID=46352425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210038597.XA Active CN102546437B (en) 2012-02-20 2012-02-20 Internet of things platform-oriented socket implementation method

Country Status (1)

Country Link
CN (1) CN102546437B (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101527719A (en) * 2009-04-27 2009-09-09 成都科来软件有限公司 Method for parallel analyzing TCP data flow

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
YIBEI LING ET AL.: "Analysis of Optimal Thread Pool Size", ACM SIGOPS OPERATING SYSTEMS REVIEW, vol. 34, no. 2, 14 February 2000 (2000-02-14), pages 4-7 *
LIU YANWANG: "Research and Design of a Web-based Real-time Control System", China Master's Theses Full-text Database, Information Science series, no. 5, 15 May 2009 (2009-05-15), pages 140-231 *
ZHOU FENGSHI: "Principle and Implementation of the Heartbeat Mechanism in Network Communication Based on Windows Socket", Journal of Shazhou Professional Institute of Technology, vol. 12, no. 3, 30 September 2009 (2009-09-30), pages 17-21 *
XIA LING: "Socket Communication between Client and Server", Computer Programming Skills & Maintenance, no. 17, 23 October 2009 (2009-10-23) *
ZHAO WENQING: "Java Language Implementation of a Socket-based Concurrent Server", Modern Electronics Technique, no. 2, 28 February 2002 (2002-02-28) *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102916953B (en) * 2012-10-12 2016-03-09 青岛海信传媒网络技术有限公司 Method and device for realizing concurrent services based on a TCP connection
CN102916953A (en) * 2012-10-12 2013-02-06 青岛海信传媒网络技术有限公司 Method and device for realizing concurrent service on basis of TCP (transmission control protocol) connection
CN105843592A (en) * 2015-01-12 2016-08-10 芋头科技(杭州)有限公司 System for implementing script operation in preset embedded system
CN104683460A (en) * 2015-02-15 2015-06-03 青岛海尔智能家电科技有限公司 Communication method, device and server for Internet of things
CN104683460B (en) * 2015-02-15 2019-08-16 青岛海尔智能家电科技有限公司 Communication method, device and server for the Internet of Things
CN104735077A (en) * 2015-04-01 2015-06-24 积成电子股份有限公司 Method for realizing efficient user datagram protocol (UDP) concurrence through loop buffers and loop queue
CN104735077B (en) * 2015-04-01 2017-11-24 积成电子股份有限公司 Method for realizing efficient UDP concurrency using circular buffers and a circular queue
CN104850460A (en) * 2015-06-02 2015-08-19 上海斐讯数据通信技术有限公司 Service program thread management method
CN105323319A (en) * 2015-11-09 2016-02-10 深圳市江波龙科技有限公司 Communication method and system for IOT equipment
CN108293067B (en) * 2015-12-23 2021-06-25 英特尔公司 Managing communication congestion for internet of things devices
CN108293067A (en) * 2015-12-23 2018-07-17 英特尔公司 Managing traffic congestion for Internet of Things devices
CN105740326A (en) * 2016-01-21 2016-07-06 腾讯科技(深圳)有限公司 Thread state monitoring method and device for browser
CN106656436A (en) * 2016-09-29 2017-05-10 安徽华速达电子科技有限公司 Communication management method and system based on intelligent optical network unit
CN108121598A (en) * 2016-11-29 2018-06-05 中兴通讯股份有限公司 Socket buffer resource management and device
CN106997307A (en) * 2017-02-13 2017-08-01 上海大学 Socket thread pool design method for multi-terminal wireless communication
CN107147663A (en) * 2017-06-02 2017-09-08 广东暨通信息发展有限公司 Synchronous communication method and system for a computer cluster
CN107332735A (en) * 2017-07-04 2017-11-07 四川长虹技佳精工有限公司 Network communication method with automatic reconnection after disconnection
CN108075947B (en) * 2017-07-31 2024-02-27 北京微应软件科技有限公司 Storage device, PC end, and method and system for maintaining communication connection connectivity
CN108075947A (en) * 2017-07-31 2018-05-25 北京微应软件科技有限公司 Storage device, PC end, and method and system for maintaining communication connection connectivity
CN107454177A (en) * 2017-08-15 2017-12-08 合肥丹朋科技有限公司 Dynamic implementation method of network services
CN109428926A (en) * 2017-08-31 2019-03-05 北京京东尚科信息技术有限公司 Method and apparatus for scheduling task nodes
CN109428926B (en) * 2017-08-31 2022-04-12 北京京东尚科信息技术有限公司 Method and device for scheduling task nodes
CN107783848A (en) * 2017-09-27 2018-03-09 歌尔科技有限公司 JSON command handling method and device based on socket communication
CN108566390A (en) * 2018-04-09 2018-09-21 中国科学院信息工程研究所 Implementation method of a satellite application-layer security protocol, and satellite message monitoring and distribution service system
CN108566390B (en) * 2018-04-09 2020-03-17 中国科学院信息工程研究所 Satellite message monitoring and distributing service system
CN109450838A (en) * 2018-06-27 2019-03-08 北京班尼费特科技有限公司 Intelligent express delivery cabinet network communication protocol based on an intelligent Internet of Things interaction platform
CN109727595A (en) * 2018-12-29 2019-05-07 神思电子技术股份有限公司 Software design method for a speech recognition server
CN111859082A (en) * 2020-05-27 2020-10-30 伏羲科技(菏泽)有限公司 Identification analysis method and device
CN111858046A (en) * 2020-07-13 2020-10-30 海尔优家智能科技(北京)有限公司 Service request processing method and device, storage medium and electronic device
CN111858046B (en) * 2020-07-13 2024-05-24 海尔优家智能科技(北京)有限公司 Service request processing method and device, storage medium and electronic device
CN111917852A (en) * 2020-07-23 2020-11-10 上海珀立信息科技有限公司 Multi-person network synchronization system based on Unity and development method
CN113438247A (en) * 2021-06-29 2021-09-24 四川巧夺天工信息安全智能设备有限公司 Method for processing data interaction conflict in socket channel
CN114785846A (en) * 2022-03-23 2022-07-22 南京邮电大学 Heartbeat monitoring method and system based on ProtoBuf protocol
CN114785846B (en) * 2022-03-23 2024-05-24 南京邮电大学 Method and system for heartbeat monitoring based on ProtoBuf protocol
CN116755863A (en) * 2023-08-14 2023-09-15 北京前景无忧电子科技股份有限公司 Socket thread pool design method for multi-terminal wireless communication
CN116755863B (en) * 2023-08-14 2023-10-24 北京前景无忧电子科技股份有限公司 Socket thread pool design method for multi-terminal wireless communication

Also Published As

Publication number Publication date
CN102546437B (en) 2014-10-22

Similar Documents

Publication Publication Date Title
CN102546437A (en) Internet of things platform-oriented socket implementation method
CN102004670B (en) Self-adaptive job scheduling method based on MapReduce
CN102916953A (en) Method and device for realizing concurrent service on basis of TCP (transmission control protocol) connection
Cross et al. Duelling timescales of host movement and disease recovery determine invasion of disease in structured populations
CN110532076A (en) Method, system and equipment for creating cloud resources and readable storage medium
CN101663647A (en) Device that determines whether to launch an application locally or remotely as a webapp
CN103246550A (en) Multitask dispatching method and system based on capacity
CN101917490A (en) Method and system for reading cache data
CN105007337A (en) Cluster system load balancing method and system thereof
CN103581336B (en) Service flow scheduling method and system based on cloud computing platform
CN107818056A (en) A kind of queue management method and device
CN103164287A (en) Distributed-type parallel computing platform system based on Web dynamic participation
CN105357250B (en) A kind of data operation system
CN113938516A (en) Method and system for synchronously realizing transaction processing of heterogeneous system
CN107135279A (en) It is a kind of to handle the method and apparatus that request is set up in long connection
CN109218369A (en) remote procedure call request control method and device
Liao et al. Energy and performance management in large data centers: A queuing theory perspective
CN103677968B (en) Transaction methods, affairs coordinator device, affairs participant's apparatus and system
CN102025783A (en) Cluster system, message processing method thereof and protocol forward gateway
CN102724132A (en) Method and device for improving transmission control protocol (TCP) connection multiplexing processing efficiency
CN102217247B (en) Method, apparatus and system for implementing multiple web application requests scheduling
CN116244081B (en) Multi-core calculation integrated accelerator network topology structure control system
CN108459941A (en) A kind of method and system of distributed data acquisition and software supervision
CN115567594A (en) Microservice request processing method, microservice request processing device, computer equipment and storage medium
CN201813401U (en) System for reading buffer data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20120704

Assignee: Jiangsu Nanyou IOT Technology Park Ltd.

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: 2016320000211

Denomination of invention: Internet of things platform-oriented socket implementation method

Granted publication date: 20141022

License type: Common License

Record date: 20161114

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
EC01 Cancellation of recordation of patent licensing contract

Assignee: Jiangsu Nanyou IOT Technology Park Ltd.

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: 2016320000211

Date of cancellation: 20180116

TR01 Transfer of patent right

Effective date of registration: 20200610

Address after: Room 408, block D, Caiying building, No.99 Tuanjie Road, Jiangbei new district, Nanjing, Jiangsu

Patentee after: Jiangsu Jiangxin Electronic Technology Co., Ltd

Address before: 210003, No. 66, new exemplary Road, Nanjing, Jiangsu

Patentee before: NANJING University OF POSTS AND TELECOMMUNICATIONS

TR01 Transfer of patent right

Effective date of registration: 20210507

Address after: 210009 3-1-1902, talent apartment, 32 dingjiaqiao, Gulou District, Nanjing City, Jiangsu Province

Patentee after: Wang Kun

Address before: Room 408, block D, Yingying building, 99 Tuanjie Road, Jiangbei new district, Nanjing City, Jiangsu Province, 211899

Patentee before: Jiangsu Jiangxin Electronic Technology Co., Ltd