CN110543373A - method for accessing kernel by user thread - Google Patents
method for accessing kernel by user thread
- Publication number
- CN110543373A (application CN201911063323.4A)
- Authority
- CN
- China
- Prior art keywords
- pointer
- count value
- shared access
- access
- original
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/546—Message passing systems or structures, e.g. queues
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2209/00—Indexing scheme relating to G06F9/00
- G06F2209/54—Indexing scheme relating to G06F9/54
- G06F2209/548—Queue
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Storage Device Security (AREA)
Abstract
A method for a user thread to access a kernel uses an access queue based on atomic operations. When a target object is accessed, a shared access count records how many times shared access has been acquired; while the count is not 0, exclusive access cannot be acquired. Each user process contains multiple user threads and one access queue. When a system interface is called, the call parameters enter the kernel state and are saved in the thread switching context. Shared access is obtained through the access queue for read-only access, and exclusive access is acquired when the related resources are modified. The invention effectively prevents user threads from occupying excessive kernel-state stack space.
Description
Technical Field
The invention relates to the field of computer system software programming, and in particular to a method for a user thread to access a kernel.
Background
In computer operating system software, applications access kernel resources and devices by calling system interfaces (APIs). A thread running in user mode enters kernel mode through a special system-call instruction, and its stack pointer is switched to a stack in kernel space. For a typical API, the call is brief and the thread quickly returns to user mode. However, when a kernel resource must be accessed exclusively or read-only, a mutex or read-write lock is required, and the lock mechanism may leave the user thread suspended on a kernel-space stack. Because temporary variables created before the lock operation must remain on the stack for the thread to use when it resumes, essentially every user thread needs its own kernel-mode stack, which wastes kernel space.
Disclosure of the Invention
The present invention aims to overcome the above deficiencies of the prior art by providing a mechanism that implements mutually exclusive or shared kernel access among multiple user threads through an access queue, so that a user thread suspended while waiting for a kernel resource does not need to remain resident on a kernel-space stack.
The technical scheme of the invention is as follows: a method for a user thread to access the kernel uses an access queue based on atomic operations. The queue consists of a head pointer, a tail pointer, a shared access count value, and visitor nodes. The head pointer points to the first node of the queue and the tail pointer points to the last node. Each node contains a link pointer, and nodes are connected through these link pointers. When a target object is accessed, the shared access count records the number of times shared access has been acquired; the count is decremented by 1 when shared access is released, and when it reaches 0 all shared accesses have ended. While the shared access count is not 0, exclusive access to the target object cannot be acquired.
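For concreteness, the queue described above might be laid out as in the following C sketch. The type and field names are illustrative rather than taken from the patent; the tail pointer and the shared access count are packed into one 16-byte, 16-byte-aligned structure so that a single double-width compare-and-swap can test and replace both values at once, as the steps below require.

```c
#include <stdbool.h>

/* Visitor node: one per waiting thread, linked through `link`. */
typedef struct visitor_node {
    struct visitor_node *link;      /* link pointer to the next queue node   */
    int                  exclusive; /* marks the node as exclusive or shared */
} visitor_node_t;

/* Tail pointer and shared access count, updated together by a single CAS.
   16-byte alignment lets x86-64 use CMPXCHG16B for the double-width CAS. */
typedef struct qstate {
    visitor_node_t *tail;           /* last node of the queue, NULL if empty */
    unsigned long   count;          /* number of shared accesses held        */
} __attribute__((aligned(16))) qstate_t;

/* Access queue: head pointer plus the (tail, count) pair. */
typedef struct access_queue {
    visitor_node_t *head;           /* first node of the queue, NULL if empty */
    qstate_t        state;
} access_queue_t;
```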
The visitor obtains shared access through the access queue as follows (a code sketch appears after this list):
Step 101. Take the original shared access count value and the original tail node pointer.
Step 102. Set the new shared access count equal to the original count, and set the new tail node pointer to the visitor node pointer, marked for shared access.
Step 103. Judge whether the original tail node pointer is a null pointer; if it is, go directly to step 105.
Step 104. Judge whether the original shared access count is 0; if it is, go directly to step 106.
Step 105. Add 1 to the new shared access count and set the new tail node pointer equal to the original tail node pointer.
Step 106. Use an atomic compare-and-swap (CAS) operation to compare the shared access count value and the tail pointer with the original count and original tail pointer and, if they match, atomically replace them with the new count and new tail pointer.
Step 107. If the replacement in step 106 fails, return to step 101; if it succeeds, execute step 108.
Step 108. Judge whether the new shared access count is 0; if it is not 0, shared access has been acquired and the procedure ends; if it is 0, execute step 109.
Step 109. Point the link pointer of the original tail node to the new tail node, indicating that shared access was not acquired and the visitor node has been added to the tail of the queue.
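A minimal C sketch of steps 101 to 109 follows, reusing the types from the previous sketch. It assumes the GCC/Clang __atomic builtins; the 16-byte compare-and-swap may require -mcx16 (or fall back to libatomic) on x86-64, and every name apart from the step numbers is illustrative.

```c
#include <stdbool.h>

/* Sketch of steps 101-109 (acquire shared access), reusing access_queue_t,
   qstate_t and visitor_node_t from the previous sketch. */
static bool acquire_shared(access_queue_t *q, visitor_node_t *me)
{
    qstate_t old, upd;

    me->link = NULL;
    me->exclusive = 0;                 /* step 102: mark for shared access */

    for (;;) {
        /* Step 101: take the original count and the original tail pointer. */
        __atomic_load(&q->state, &old, __ATOMIC_ACQUIRE);

        /* Step 102: new count = original count, new tail = this visitor. */
        upd.count = old.count;
        upd.tail  = me;

        /* Steps 103-105: if the queue is empty or shared access is already
           held, bump the count and keep the original tail instead. */
        if (old.tail == NULL || old.count != 0) {
            upd.count = old.count + 1;
            upd.tail  = old.tail;
        }

        /* Step 106: compare and replace (count, tail) in one atomic CAS. */
        if (__atomic_compare_exchange(&q->state, &old, &upd, false,
                                      __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE))
            break;
        /* Step 107: CAS failed, retry from step 101. */
    }

    /* Step 108: a non-zero new count means shared access was granted. */
    if (upd.count != 0)
        return true;

    /* Step 109: the visitor node became the new tail; link it behind the
       original tail.  The caller must now suspend and wait to be woken. */
    old.tail->link = me;
    return false;
}
```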
The visitor acquires exclusive access through the access queue as follows (see the sketch after this list):
Step 201. Take the original shared access count value and the original tail node pointer.
Step 202. Set the new shared access count equal to the original count, and set the new tail node pointer to the visitor node pointer, marked for exclusive access.
Step 203. Use an atomic compare-and-swap (CAS) operation to compare the shared access count value and the tail pointer with the original count and original tail pointer and, if they match, atomically replace them with the new count and new tail pointer.
Step 204. If the replacement in step 203 fails, return to step 201; if it succeeds, execute step 205.
Step 205. Judge whether the original tail node pointer is a null pointer; if it is not, execute step 206; if it is, execute step 207.
Step 206. Point the link pointer of the original tail node to the new tail node, indicating that exclusive access was not acquired, and the procedure ends.
Step 207. Point the head pointer to the new tail node and judge whether the original shared access count is 0: if it is 0, exclusive access has been acquired; if it is not 0, exclusive access was not acquired.
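Under the same assumptions, steps 201 to 207 might be sketched as follows.

```c
/* Sketch of steps 201-207 (acquire exclusive access), under the same
   assumptions as the shared-access sketch. */
static bool acquire_exclusive(access_queue_t *q, visitor_node_t *me)
{
    qstate_t old, upd;

    me->link = NULL;
    me->exclusive = 1;              /* step 202: mark for exclusive access */

    for (;;) {
        /* Step 201: take the original count and the original tail pointer. */
        __atomic_load(&q->state, &old, __ATOMIC_ACQUIRE);

        /* Step 202: new count = original count, new tail = this visitor. */
        upd.count = old.count;
        upd.tail  = me;

        /* Step 203: compare and replace (count, tail) in one atomic CAS. */
        if (__atomic_compare_exchange(&q->state, &old, &upd, false,
                                      __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE))
            break;
        /* Step 204: CAS failed, retry from step 201. */
    }

    /* Steps 205-206: a non-empty queue means another accessor is ahead;
       link behind the original tail and report failure. */
    if (old.tail != NULL) {
        old.tail->link = me;
        return false;
    }

    /* Step 207: the queue was empty, so this node becomes the head.
       Exclusive access is granted only if no shared access was held. */
    q->head = me;
    return old.count == 0;
}
```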
The visitor releases shared access through the access queue as follows (see the sketch after this list):
Step 301. Take the original shared access count value and the original tail node pointer.
Step 302. Set the new shared access count equal to the original count minus 1, and set the new tail node pointer equal to the original tail node pointer.
Step 303. Use an atomic compare-and-swap (CAS) operation to compare the shared access count value and the tail pointer with the original count and original tail pointer and, if they match, atomically replace them with the new count and new tail pointer.
Step 304. If the replacement in step 303 fails, return to step 301; if it succeeds, execute step 305.
Step 305. Judge whether the new shared access count is 0; if it is not 0, the shared access has been released and the procedure ends.
Step 306. Judge whether the new tail node pointer is a null pointer: if it is, the shared access is released successfully; if it is not, the release fails (a queued accessor must still be processed).
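Steps 301 to 306 might be sketched as follows; the boolean result encodes the success/failure distinction of steps 305 and 306, with false meaning the caller must go on to process the waiting accessor at the head of the queue.

```c
/* Sketch of steps 301-306 (release shared access).  Returns true when the
   release completes; false means this was the last shared holder and the
   queue is not empty, so the caller must process the waiting accessor. */
static bool release_shared(access_queue_t *q)
{
    qstate_t old, upd;

    for (;;) {
        /* Step 301: take the original count and the original tail pointer. */
        __atomic_load(&q->state, &old, __ATOMIC_ACQUIRE);

        /* Step 302: decrement the count, keep the tail unchanged. */
        upd.count = old.count - 1;
        upd.tail  = old.tail;

        /* Step 303: compare and replace (count, tail) in one atomic CAS. */
        if (__atomic_compare_exchange(&q->state, &old, &upd, false,
                                      __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE))
            break;
        /* Step 304: CAS failed, retry from step 301. */
    }

    /* Step 305: other shared holders remain, nothing more to do. */
    if (upd.count != 0)
        return true;

    /* Step 306: last shared holder; the release is complete only if the
       queue is empty, otherwise a queued accessor must still be woken. */
    return upd.tail == NULL;
}
```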
The visitor releases the queue through the access queue as follows (see the sketch after this list):
Step 401. Take the original shared access count value, the original tail node pointer, and the original head node pointer.
Step 402. Judge whether the original tail node pointer is the same as the head node pointer; if not, go directly to step 407.
Step 403. Set the new shared access count equal to the original count and set the new tail node pointer to a null pointer.
Step 404. If the new shared access count is not 0, subtract 1 from it.
Step 405. Replace the head pointer with a null pointer, and use an atomic compare-and-swap (CAS) operation to compare the shared access count value and the tail pointer with the original count and original tail pointer and, if they match, atomically replace them with the new count and new tail pointer.
Step 406. If the replacement in step 405 fails, return to step 401; if it succeeds, the queue has been completely released and the procedure ends.
Step 407. Point the head pointer to the next node after the head node, and the procedure ends.
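Steps 401 to 407 might be sketched as follows. One deviation is hedged explicitly: the patent clears the head pointer in step 405 before the compare-and-swap, while this illustration applies that write only after the compare-and-swap succeeds, so that the retry path of the sketch stays well defined.

```c
/* Sketch of steps 401-407 (release the queue).  Returns true when the queue
   has been completely released; false means the head was advanced to the
   next node and the caller should continue processing the queue.  Unlike
   step 405 as written, the head pointer is cleared here only after the CAS
   succeeds, to keep the retry path of this sketch well defined. */
static bool release_queue(access_queue_t *q)
{
    qstate_t old, upd;
    visitor_node_t *head;

    for (;;) {
        /* Step 401: take the original count, tail pointer and head pointer. */
        __atomic_load(&q->state, &old, __ATOMIC_ACQUIRE);
        head = q->head;

        /* Steps 402 and 407: another node follows the head, so advance the
           head pointer; the queue is not completely released yet. */
        if (old.tail != head) {
            q->head = head->link;
            return false;
        }

        /* Step 403: the head is also the tail; prepare an empty queue. */
        upd.tail  = NULL;
        upd.count = old.count;

        /* Step 404: drop the count held by the departing node, if any. */
        if (upd.count != 0)
            upd.count -= 1;

        /* Steps 405-406: replace (count, tail) in one atomic CAS and clear
           the head pointer; on failure retry from step 401. */
        if (__atomic_compare_exchange(&q->state, &old, &upd, false,
                                      __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE)) {
            q->head = NULL;
            return true;
        }
    }
}
```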
Further, a set of system interfaces and user processes are provided, each user process comprising multiple user threads and one access queue. Call parameters passed in registers are saved in the thread switching context after entering the kernel state, so that they can be restored when the thread redoes the system call.
Because the calling process of the system interface is rebuilt by redoing the system call, no parameters or temporary variables need to reside on the kernel-mode stack.
When a user thread needs read-only access to the related resources of its process, it obtains shared access through the access queue; when it needs to modify those resources, it acquires exclusive access through the access queue.
When a user thread acquires shared access, it first judges whether it is in the redo-system-call state. If it is, it accesses the related resources of the process directly; if not, it obtains shared access through the access queue and, if successful, accesses the related resources.
When a user thread fails to obtain shared access, it is added to the access queue as a visitor node and then returns to the scheduler to be suspended. After all exclusive accesses have been processed, all threads in the access queue that acquired shared access are taken out, set to the redo-system-call state, and scheduled to execute the redone system call.
When a user thread acquires exclusive access, it first judges whether it is in the redo-system-call state. If it is, it accesses the related resources of the process first and then releases the queue; if the access queue has not been completely released, it continues to process the accesses in the queue. If the thread is not in the redo-system-call state, it acquires exclusive access through the access queue and, if successful, processes the accesses in the queue.
When a user thread fails to acquire exclusive access, it is added to the access queue as a visitor node and then returns to the scheduler to be suspended. If the thread is the first node in the access queue, then after all shared accesses have been processed, the first thread in the queue is taken out, set to the redo-system-call state, and scheduled to execute the redone system call. A rough sketch of this flow is given below.
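The sketch ties the queue primitives sketched earlier to the shared-access flow just described. The thread structure and the scheduler hooks are hypothetical stand-ins introduced only for illustration; they are not names from the patent or from any real kernel.

```c
/* Hypothetical thread bookkeeping and scheduler hooks, used only to make the
   control flow concrete; none of these names come from the patent. */
struct user_thread {
    visitor_node_t node;   /* this thread's visitor node                     */
    int            redo;   /* set while the thread is in the redo-call state */
};

static void suspend_to_scheduler(struct user_thread *t) { (void)t; /* stub */ }
static void schedule_redo(visitor_node_t *n)            { (void)n; /* stub */ }

/* Read-only system call path: take shared access, or enqueue and suspend. */
static void shared_access_syscall(struct user_thread *t, access_queue_t *q)
{
    /* A thread re-entering through a redone system call already holds access. */
    if (!t->redo && !acquire_shared(q, &t->node)) {
        /* The visitor node is already queued; go back to the scheduler
           without keeping call parameters on a kernel-mode stack. */
        suspend_to_scheduler(t);
        return;
    }
    t->redo = 0;

    /* ... read-only access to the process's resources goes here ... */

    if (!release_shared(q)) {
        /* Last reader with a queued waiter: put the head of the queue into
           the redo-system-call state and schedule it. */
        schedule_redo(q->head);
    }
}
```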
Compared with the prior art, the invention has the following characteristics: a mechanism that implements mutually exclusive or shared access among multiple user threads through an access queue, with suspended threads scheduled to redo the system call so as to rebuild the calling process of the system interface. The technical scheme effectively avoids user threads occupying kernel-mode stacks, so that the number of kernel-mode stacks is essentially proportional to the number of processors in the system and does not grow with the number of user threads.
Drawings
FIG. 1 is a schematic flow diagram of obtaining shared access;
FIG. 2 is a flow diagram illustrating a process for obtaining exclusive access;
FIG. 3 is a schematic flow chart of releasing shared access;
FIG. 4 is a schematic flow chart of releasing the queue.
Detailed Description
The method for accessing the kernel by the user thread according to the present invention is further described with reference to the accompanying drawings and the detailed description.
An access queue based on atomic operations is established; the queue consists of a head pointer, a tail pointer, a shared access count value, and visitor nodes. The head pointer points to the first node of the queue and the tail pointer points to the last node. Each node contains a link pointer, and nodes are connected through these link pointers. When a target object is accessed, the shared access count records the number of times shared access has been acquired; the count is decremented by 1 when shared access is released, and when it reaches 0 all shared accesses have ended. While the shared access count is not 0, exclusive access to the target object cannot be acquired.
As shown in FIG. 1, when acquiring shared access, step 101 sets A equal to the original shared access count value and B equal to the original tail node pointer. Step 102 sets C equal to A and D equal to the visitor node pointer marked with the shared access flag. Step 103 judges whether B is a null pointer; if so, it jumps directly to step 105. Step 104 judges whether A is 0; if so, it jumps directly to step 106. Step 105 adds 1 to C and sets D equal to B. Step 106 uses a CAS operation to check whether the shared access count value and the tail pointer still equal A and B and, if so, atomically replaces them with C and D. Step 107 returns to step 101 and repeats the above steps until the replacement succeeds. Step 108 judges whether C is 0; if not, shared access has been acquired; if so, step 109 points the link pointer of B to D.
As shown in FIG. 2, when acquiring exclusive access, step 201 sets A equal to the original shared access count value and B equal to the original tail node pointer. Step 202 sets C equal to A and D equal to the visitor node pointer marked with the exclusive access flag. Step 203 uses a CAS operation to check whether the shared access count value and the tail pointer still equal A and B and, if so, atomically replaces them with C and D. Step 204 returns to step 201 and repeats the above steps until the replacement succeeds. Step 205 judges whether B is a null pointer: if not, step 206 points the link pointer of B to D; if so, step 207 points the head pointer to D.
As shown in FIG. 3, when releasing shared access, step 301 sets A equal to the original shared access count value and B equal to the original tail node pointer. Step 302 sets C equal to A minus 1 and D equal to B. Step 303 uses a CAS operation to check whether the shared access count value and the tail pointer still equal A and B and, if so, atomically replaces them with C and D. Step 304 returns to step 301 and repeats the above steps until the replacement succeeds. In step 305, if C is not 0, the shared access has been released and the flow ends. In step 306, if D is a null pointer, the shared access has been released and the flow ends; otherwise the release has not completed.
As shown in FIG. 4, when releasing the queue, step 401 sets A equal to the original shared access count value, B equal to the original tail pointer, and F equal to the head pointer. Step 402 judges whether B equals F; if not, it jumps directly to step 407. Step 403 sets C equal to A and D equal to a null pointer. Step 404 judges whether C is 0; if not, it subtracts 1 from C. Step 405 replaces the head pointer with a null pointer and uses a CAS operation to check whether the shared access count value and the tail pointer still equal A and B and, if so, atomically replaces them with C and D. Step 406 returns to step 401 and repeats the above steps until the replacement succeeds, at which point the flow ends. Step 407 points the head pointer to the node after the head node.
A set of system interfaces and a plurality of user processes are provided, each user process comprising a plurality of user threads and one access queue. When a user thread calls a system interface, the call parameters passed in registers are saved in the thread switching context after entering the kernel state, so that they can be restored when the thread redoes the system call.
Redoing a system call means that, before a suspended thread is scheduled to return to user mode, all register values are restored from the thread switching context and the system-call instruction that originally entered the kernel is executed again. Because the calling process of the system interface is rebuilt by redoing the system call, no parameters or temporary variables need to reside on the kernel-mode stack.
When a user thread needs read-only access to the related resources of its process, it obtains shared access through the access queue; when it needs to modify those resources, it acquires exclusive access through the access queue.
When a user thread acquires shared access, it first judges whether it is in the redo-system-call state. If it is, it accesses the related resources of the process directly; if not, it obtains shared access through the access queue and, if successful, accesses the related resources.
When a user thread fails to obtain shared access, it is added to the access queue as a visitor node and then returns to the scheduler to be suspended. After all exclusive accesses have been processed, all threads in the access queue that acquired shared access are taken out, set to the redo-system-call state, and scheduled to execute the redone system call.
When a user thread acquires exclusive access, it first judges whether it is in the redo-system-call state. If it is, it accesses the related resources of the process first and then releases the queue; if the access queue has not been completely released, it continues to process the accesses in the queue. If the thread is not in the redo-system-call state, it acquires exclusive access through the access queue and, if successful, processes the accesses in the queue.
When a user thread fails to acquire exclusive access, it is added to the access queue as a visitor node and then returns to the scheduler to be suspended. If the thread is the first node in the access queue, then after all shared accesses have been processed, the first thread in the queue is taken out, set to the redo-system-call state, and scheduled to execute the redone system call.
In the field of computer system software programming, a thread switching context refers to the set of register values and state values used to save the running state of a processor at the time of a thread switch; when the thread is scheduled to resume running, these values are restored to the processor, returning it to the state it was in before the thread was suspended. A system call is the process by which a thread of an application program enters the kernel state from the user state by executing a special processor instruction; on Intel x86 processors these instructions include syscall, sysenter, and various interrupt instructions.
The present invention relies on the concept of atomic operations in the field of computer technology, and those skilled in the art can implement these atomic operations on different processor platforms using the relevant machine instructions of each platform. For example, the atomic compare-and-swap (CAS) operations used in the invention may be implemented with the LOCK CMPXCHG and LOCK CMPXCHG8B instructions on the Intel x86 platform, or with the LOCK CMPXCHG16B instruction in 64-bit mode. Different processor hardware platforms provide different machine instructions for these atomic operations, and such differences should not be construed as falling outside the scope of the invention.
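For example, on x86-64 with GCC or Clang, a 16-byte compare-and-swap over a (tail pointer, count) pair can be written with the generic __atomic builtins, which the compiler can lower to LOCK CMPXCHG16B when compiled with -mcx16 (otherwise it falls back to libatomic). The snippet below is a self-contained illustration, not code from the patent.

```c
#include <stdio.h>
#include <stdbool.h>

/* A (pointer, counter) pair sized and aligned for CMPXCHG16B on x86-64. */
typedef struct pair {
    void          *tail;
    unsigned long  count;
} __attribute__((aligned(16))) pair_t;

int main(void)
{
    pair_t state    = { NULL, 0 };
    pair_t expected = { NULL, 0 };
    pair_t desired  = { NULL, 1 };   /* e.g. one shared access being taken */

    /* Compare `state` with `expected`; if they match, replace `state` with
       `desired` atomically.  On a mismatch, `expected` is updated instead. */
    bool ok = __atomic_compare_exchange(&state, &expected, &desired, false,
                                        __ATOMIC_ACQ_REL, __ATOMIC_ACQUIRE);

    printf("CAS %s, count is now %lu\n", ok ? "succeeded" : "failed",
           state.count);
    return 0;
}
```

Built with, say, `gcc -mcx16 cas_demo.c`, the program should report a successful CAS and a count of 1.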
Claims (10)
1. A method for accessing a kernel by a user thread, characterized in that the method uses an access queue based on atomic operations, the access queue consisting of a head pointer, a tail pointer, a shared access count value, and visitor nodes; the head pointer points to the first node of the queue, and the tail pointer points to the last node of the queue; each node comprises a link pointer, and the nodes are connected through the link pointers; the shared access count value records the number of times shared access has been acquired: the count is incremented by 1 when shared access to the target object is acquired and decremented by 1 when shared access is released, and when the count reaches 0 all shared accesses have ended; when the shared access count value is not 0, exclusive access to the target object cannot be acquired.
2. The method of claim 1, wherein the step of obtaining shared access comprises:
Step 101, taking an original shared access count value and an original tail node pointer;
Step 102, making the new shared access count value equal to the original shared access count value, making the new tail node pointer equal to the visitor node pointer and marking the shared access;
Step 103, judging whether the original tail node pointer is a null pointer, if so, directly entering step 105, and if not, entering step 104;
Step 104, judging whether the original shared access count value is 0, if so, directly entering step 106, and if not, entering step 105;
Step 105, adding 1 to the new shared access count value and making the new tail node pointer equal to the original tail node pointer;
Step 106, using atomic comparison and exchange operation to try to simultaneously compare the shared access count value and the tail pointer with the original shared access count value and the original tail node pointer and atomically replace the shared access count value and the tail pointer with a new shared access count value and a new tail node pointer respectively;
Step 107, returning to step 101 if the replacing operation of step 106 fails, and executing step 108 if the replacing operation of step 106 succeeds;
Step 108, judging whether the new shared access count value is 0, if not, indicating that the shared access is successfully obtained and the operation flow is finished, and if so, executing step 109;
Step 109, pointing the link pointer of the original tail node to the new tail node, indicating that the shared access was not acquired and the visitor node has been added to the tail of the queue, and ending the operation flow.
3. The method according to claim 1, wherein the step of obtaining exclusive access comprises:
Step 201, an original shared access count value and an original tail node pointer are taken;
Step 202, making the new shared access count value equal to the original shared access count value, making the new tail node pointer equal to the visitor node pointer and marking the exclusive access;
Step 203, using atomic comparison and exchange operation to try to simultaneously compare the shared access count value and the tail pointer with the original shared access count value and the original tail node pointer and atomically replace the shared access count value and the tail pointer with a new shared access count value and a new tail node pointer;
Step 204, if the replacing operation of the step 203 fails, returning to the step 201, and if the replacing operation of the step 203 succeeds, executing the step 205;
Step 205, judging whether the original tail node pointer is a null pointer, if not, executing step 206, and if so, executing step 207;
Step 206, pointing the link pointer of the original tail node to the new tail node, indicating that the exclusive access acquisition fails, and ending the operation flow;
Step 207, pointing the head pointer to the new tail node, and ending the operation flow; if the original shared access count value is 0, the exclusive access acquisition succeeds, and if it is not 0, the exclusive access acquisition fails.
4. The method of claim 1, wherein releasing shared access comprises:
Step 301, taking an original shared access count value and an original tail node pointer;
Step 302, making the new shared access count value equal to the original shared access count value minus 1, and the new tail node pointer equal to the original tail node pointer;
Step 303, using atomic comparison and exchange operation to try to simultaneously compare the shared access count value and the tail pointer with the original shared access count value and the original tail node pointer and atomically replace the shared access count value and the tail pointer with a new shared access count value and a new tail node pointer respectively;
Step 304, if the replacing operation of the step 303 fails, returning to the step 301, and if the replacing operation of the step 303 succeeds, executing the step 305;
Step 305, judging whether the new shared access count value is 0, if so, executing step 306, and if not, indicating that the shared access is released successfully and the operation flow is ended;
Step 306, judging whether the new tail node pointer is a null pointer, and ending the operation flow; if the new tail node pointer is a null pointer, the shared access is released successfully, and if it is not a null pointer, the release of the shared access fails.
5. The method of claim 1, wherein the step of releasing the queue comprises:
Step 401, taking an original shared access count value, an original tail node pointer and an original head node pointer;
Step 402, judging whether the original tail node pointer is the same as the head node pointer, if so, entering step 403, and if not, entering step 407;
Step 403, making the new shared access count value equal to the original shared access count value, and making the new tail node pointer equal to the null pointer;
Step 404, if the new shared access count value is 0, directly executing step 405; if the new shared access count value is not 0, subtracting 1 from the new shared access count value, and then executing step 405;
Step 405, replacing the head pointer with a null pointer, using atomic comparison and exchange operation to try to simultaneously compare and atomically replace the shared access count value and the tail pointer with the original shared access count value and the original tail node pointer with a new shared access count value and a new tail node pointer respectively;
Step 406. if the replacing operation of step 405 fails, returning to step 401; if the replacing operation in step 405 is successful, i.e. the queue is completely released, the operation flow ends;
Step 407, point the head pointer to the next node of the head node, and the operation flow is finished.
6. A method for accessing a kernel by a user thread according to any one of claims 1 to 5, wherein the method comprises providing a set of system interfaces and user processes, each user process comprising a plurality of user threads and an access queue; when a user thread calls a system interface, the call parameters passed in registers are stored in the thread switching context after entering the kernel state, so that the call parameters are restored when the thread redoes the system call; when a user thread needs to perform read-only access on the related resources of its process, shared access is obtained through the access queue; when the user thread needs to modify the related resources of the process, exclusive access is acquired through the access queue.
7. The method for accessing the kernel by the user thread according to claim 6, wherein when the user thread obtains shared access, it is first determined whether the user thread is in the state of redoing the system call; if so, the related resources of the process are directly accessed, otherwise shared access is obtained through the access queue; and if the shared access is successfully obtained, the related resources of the process are accessed.
8. The method according to claim 7, wherein when the user thread fails to obtain the shared access, the user thread is added to the access queue as a visitor node and then returns to the scheduler to be suspended; after all exclusive accesses are processed, all threads which acquire shared access in the access queue are taken out and set to be in a redo system call state, and are scheduled to execute the redo system call.
9. The method for accessing the kernel by the user thread according to claim 6, wherein when the user thread obtains the exclusive access, it is firstly determined whether the user thread is in a state of redoing the system call, if so, the related resources of the process are firstly accessed and then the queue is released, and if the access queue is not completely released, the access in the queue is continuously processed; and if the user thread is not in the state of redoing the system call, acquiring exclusive access through the access queue, and if the exclusive access is successfully acquired, processing the access in the queue.
10. The method according to claim 9, wherein when the user thread fails to acquire exclusive access, the user thread is added to the access queue as a visitor node and then returns to the scheduler to suspend; and if the user thread is the first node in the access queue, taking out the first thread in the queue after all shared accesses are processed, setting the first thread in the queue to be in the state of redo system call, and scheduling and executing the redo system call.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911063323.4A CN110543373B (en) | 2019-11-04 | 2019-11-04 | Method for accessing kernel by user thread |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110543373A (en) | 2019-12-06
CN110543373B CN110543373B (en) | 2020-03-17 |
Family
ID=68716053
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911063323.4A Active CN110543373B (en) | 2019-11-04 | 2019-11-04 | Method for accessing kernel by user thread |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110543373B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113778674A (en) * | 2021-08-31 | 2021-12-10 | 上海弘积信息科技有限公司 | Lock-free implementation method of load balancing equipment configuration management under multi-core |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103294753A (en) * | 2012-01-30 | 2013-09-11 | 辉达公司 | Lock-free fifo |
US20140281349A1 (en) * | 2013-03-15 | 2014-09-18 | Genband Us Llc | Receive-side scaling in a computer system |
CN104615445A (en) * | 2015-03-02 | 2015-05-13 | 长沙新弘软件有限公司 | Equipment IO queue method based on atomic operation |
CN104809027A (en) * | 2015-04-21 | 2015-07-29 | 浙江大学 | Data collection method based on lock-free buffer region |
US9672038B2 (en) * | 2014-09-16 | 2017-06-06 | Oracle International Corporation | System and method for supporting a scalable concurrent queue in a distributed data grid |
Also Published As
Publication number | Publication date |
---|---|
CN110543373B (en) | 2020-03-17 |
Similar Documents
Publication | Title
---|---
JP4213582B2 (en) | Computer multitasking via virtual threads
US7506339B2 (en) | High performance synchronization of accesses by threads to shared resources
US9798595B2 (en) | Transparent user mode scheduling on traditional threading systems
KR100976280B1 (en) | Multi processor and multi thread safe message queue with hardware assistance
US7962923B2 (en) | System and method for generating a lock-free dual queue
US7406699B2 (en) | Enhanced runtime hosting
US9086911B2 (en) | Multiprocessing transaction recovery manager
US8769546B2 (en) | Busy-wait time for threads
KR100902977B1 (en) | Hardware sharing system and method
CN111459691A (en) | Read-write method and device for shared memory
CN110543373B (en) | Method for accessing kernel by user thread
US11645124B2 (en) | Program execution control method and vehicle control device
Michael et al. | Relative performance of preemption-safe locking and non-blocking synchronization on multiprogrammed shared memory multiprocessors
CN113110924A (en) | Universal multithreading task execution method, device, medium and equipment
CN116010040A (en) | Method, device and equipment for acquiring lock resources
CN109375990B (en) | Ring linked list method based on atomic operation
JPH07319716A (en) | Exclusive control system for resources of computer system
JP2019204387A (en) | Program execution control method and program converter
US20130166887A1 (en) | Data processing apparatus and data processing method
US7996848B1 (en) | Systems and methods for suspending and resuming threads
US20040103414A1 (en) | Method and apparatus for interprocess communications
CN115525403A (en) | Ownership of processing threads
Hövelmann et al. | Path Expressions Revisited
JPH03158936A (en) | Testing method for program
US20210279096A1 (en) | A System Implementing Multi-Threaded Applications
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant