
CN103345385A - Method for converting serial events into parallel events - Google Patents

Method for converting serial events into parallel events Download PDF

Info

Publication number
CN103345385A
CN103345385A CN2013103224974A CN201310322497A
Authority
CN
China
Prior art keywords
event
data structure
thread
definition
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2013103224974A
Other languages
Chinese (zh)
Inventor
程卫双
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Hanbang Gaoke Digital Technology Co Ltd
Original Assignee
Beijing Hanbang Gaoke Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Hanbang Gaoke Digital Technology Co Ltd filed Critical Beijing Hanbang Gaoke Digital Technology Co Ltd
Priority to CN2013103224974A priority Critical patent/CN103345385A/en
Publication of CN103345385A publication Critical patent/CN103345385A/en
Pending legal-status Critical Current

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The invention discloses a method for converting serial events into parallel events. With this method, the event callback function is not blocked, and business logic that previously could only be executed serially can be executed in parallel. The method comprises three steps: defining events, receiving and processing events, and distributing events. Parallelization of serial events is achieved by using a memory pool, a thread pool, and business control logic.

Description

Method for converting serial events into parallel events
Technical field
The present invention relates to the field of computer technology, and more specifically to a method for converting serial events into parallel events.
Background technology
A common problem in computer application development is that certain business logic code must run when an event is triggered. When the execution time of this code is so short that it can be ignored, there is no problem. But when the execution time cannot be ignored, the event callback function is blocked, and because the business logic between events runs serially, it needs to be parallelized.
Summary of the invention
The technical problem solved by the present invention is to overcome the deficiencies of the prior art by providing a method for converting serial events into parallel events that does not block the event callback function and that allows business logic that previously could only be executed serially to be executed in parallel.
The technical solution of the present invention is a method for converting serial events into parallel events, comprising three steps: event definition, event reception and processing, and event distribution, wherein:
(1) the event definition step defines an event data structure comprising: an event type identification number, a pointer to a data structure composed of the original event's required parameters, the length of that data structure, a linked list of event dispatch function pointers, and the reference count of the current event;
(2) the event reception and processing step comprises:
(2.1) defining an event queue, an event queue monitor thread, a memory pool, and a thread pool;
(2.2) receiving the event's required parameters from the event source;
(2.3) requesting memory from the memory pool according to the parameter size and initializing the event data structure of step (1);
(2.4) storing the event data structure in the event container;
(3) the event distribution step comprises:
(3.1) taking the event data structure out of the event container;
(3.2) using the thread pool to distribute the original parameters in the event data structure to the corresponding dispatch functions.
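The three steps above can be sketched as a minimal, illustrative C++ example. All names (Event, receive_event, dispatch_one) are assumptions for illustration; the patent specifies no API, and a real implementation would hand the dispatch functions to thread pool workers rather than call them inline.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <queue>
#include <vector>

// Hypothetical sketch of the event data structure from step (1).
struct Event {
    int type_id;                                           // event type identification number
    void* params;                                          // pointer to the original event parameters
    std::size_t params_len;                                // length of the parameter structure
    std::vector<std::function<void(void*)>> dispatch_fns;  // list of dispatch functions
    int ref_count;                                         // reference count of the current event
};

// Step (2): the event container (here a plain queue; the patent adds a
// monitor thread, a memory pool, and a thread pool around it).
std::queue<Event> event_queue;

void receive_event(int type_id, void* params, std::size_t len,
                   std::vector<std::function<void(void*)>> fns) {
    Event ev{type_id, params, len, fns, static_cast<int>(fns.size())};
    event_queue.push(ev);                                  // step (2.4): store in the container
}

// Step (3): take one event out and run each dispatch function on its
// parameters. Returns the number of functions dispatched.
int dispatch_one() {
    if (event_queue.empty()) return 0;
    Event ev = event_queue.front();
    event_queue.pop();
    for (auto& fn : ev.dispatch_fns) fn(ev.params);        // in parallel via a thread pool in practice
    return static_cast<int>(ev.dispatch_fns.size());
}
```

Each dispatch function receives the same parameter pointer, which is why the patent tracks a reference count before the memory can be recycled.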
Because the method's three steps (event definition, event reception and processing, and event distribution) use a memory pool, a thread pool, and business control logic, the parallelization of serial events succeeds: even if the event callback function contains a great deal of time-consuming business logic, the callback is not blocked as long as the dispatch functions are allocated reasonably, and business logic that could only run serially can be executed in parallel.
Description of drawings
Fig. 1 is a schematic diagram of a preferred embodiment of the present invention.
Embodiment
This method for converting serial events into parallel events comprises three steps: event definition, event reception and processing, and event distribution, wherein:
(1) the event definition step defines an event data structure comprising: an event type identification number, a pointer to a data structure composed of the original event's required parameters, the length of that data structure, a linked list of event dispatch function pointers, and the reference count of the current event;
(2) the event reception and processing step comprises:
(2.1) defining an event queue, an event queue monitor thread, a memory pool, and a thread pool;
(2.2) receiving the event's required parameters from the event source;
(2.3) requesting memory from the memory pool according to the parameter size and initializing the event data structure of step (1);
(2.4) storing the event data structure in the event container;
(3) the event distribution step comprises:
(3.1) taking the event data structure out of the event container;
(3.2) using the thread pool to distribute the original parameters in the event data structure to the corresponding dispatch functions.
Memory pools and thread pools are briefly introduced below.
A memory pool (Memory Pool) is a memory allocation scheme; a detailed description is available at:
http://www.cnblogs.com/bangerlee/archive/2011/08/31/2161421.html. We are usually accustomed to allocating memory directly through APIs such as new and malloc. The drawback is that, because the requested blocks vary in size, frequent allocation causes heavy memory fragmentation and thus degrades performance.
A memory pool instead pre-allocates a number of (generally equal-sized) memory blocks as a reserve before the memory is actually used. When new memory is needed, part of the pool is handed out; if the existing blocks are insufficient, additional memory is requested. A notable advantage of this approach is that memory fragmentation is largely avoided, so allocation efficiency improves.
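A minimal sketch of this idea, assuming fixed-size blocks and a simple free list (the class and method names are illustrative, not from the patent or the cited page):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <vector>

// Fixed-size memory pool: pre-allocate equal-sized blocks, hand them out on
// request, grow only when the free list is exhausted. For brevity this
// sketch does not track outstanding blocks in the destructor.
class SimplePool {
public:
    SimplePool(std::size_t block_size, std::size_t initial_blocks)
        : block_size_(block_size) {
        for (std::size_t i = 0; i < initial_blocks; ++i)
            free_list_.push_back(std::malloc(block_size_));
    }
    ~SimplePool() {
        for (void* p : free_list_) std::free(p);
    }
    void* allocate() {
        if (free_list_.empty())              // pool exhausted: request new memory
            return std::malloc(block_size_);
        void* p = free_list_.back();         // reuse a pre-allocated block
        free_list_.pop_back();
        return p;
    }
    void release(void* p) {                  // return the block to the pool
        free_list_.push_back(p);
    }
    std::size_t free_blocks() const { return free_list_.size(); }
private:
    std::size_t block_size_;
    std::vector<void*> free_list_;
};
```

Because released blocks go back onto the free list instead of to the system allocator, repeated allocate/release cycles of the same size produce no fragmentation.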
Memory pool implementation
The overall architecture of the scheme comprises three structures: block, list, and pool. The block structure contains a pointer to the actual memory space, and blocks form a doubly linked list through forward and backward pointers. In the list structure, the free pointer points to the linked list of free memory blocks, the used pointer points to the linked list of blocks in use by the program, and the size value is the block size; the list structures themselves form a singly linked list. The pool structure records the head and tail of the list chain.
Memory tracking strategy
In this scheme, each allocation requests 12 extra bytes; that is, the size actually allocated is the requested size plus 12. The extra 12 bytes hold the corresponding list pointer (4 bytes), used pointer (4 bytes), and a check code (4 bytes). With this layout, the list and block to which a piece of memory belongs are easily recovered, and the check code provides a rough check for errors.
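The tracking header can be sketched as follows. The 4-byte pointer sizes assume the 32-bit targets implied by the text; on a 64-bit platform pointers are 8 bytes and the header grows accordingly. The layout and the magic check value are illustrative assumptions:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Tracking header prepended to every allocation: a list pointer, a used-node
// pointer, and a check code (4 bytes each on the 32-bit targets the patent
// text assumes; larger with 64-bit pointers and padding).
struct Header {
    void*    list;   // which per-size list this block belongs to
    void*    used;   // node in that list's "used" chain
    uint32_t check;  // check code for rough error detection
};

const uint32_t kCheck = 0xABCD1234u;  // illustrative magic value

void* tracked_alloc(std::size_t n, void* list, void* used_node) {
    Header* h = static_cast<Header*>(std::malloc(sizeof(Header) + n));
    h->list  = list;
    h->used  = used_node;
    h->check = kCheck;
    return h + 1;                     // user memory starts after the header
}

// Walk back from a user pointer to recover the header (and thus the list
// and block), as the release strategy below requires.
Header* header_of(void* user_ptr) {
    return static_cast<Header*>(user_ptr) - 1;
}
```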
Memory allocation and release strategy
Allocation: traverse the list chain according to the requested size, checking whether a matching size exists;
If a matching size exists: check whether free is NULL
free is NULL: allocate memory with malloc/new and append it to the tail of the chain pointed to by used
free is not NULL: remove a node from the chain pointed to by free and append it to the tail of the chain pointed to by used
If no matching size exists: create a new list, allocate memory with malloc/new, and append it to the tail of that list's used chain
Return the memory pointer
Release: using the memory tracking strategy, obtain the list pointer and used pointer, delete the block from the chain pointed to by used, and append it to the chain pointed to by free
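The strategy above can be sketched with one free list and one used list per block size. The tracking header is omitted for brevity, and all names are illustrative assumptions:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <list>
#include <map>

// One free list and one used list per block size, as in the scheme above.
struct SizeList {
    std::list<void*> free_blocks;
    std::list<void*> used_blocks;
};

std::map<std::size_t, SizeList> lists;  // "traverse the list chain" becomes a map lookup

void* pool_alloc(std::size_t size) {
    SizeList& sl = lists[size];          // creates the list for a new size value
    void* p;
    if (sl.free_blocks.empty()) {
        p = std::malloc(size);           // free is NULL: fall back to malloc
    } else {
        p = sl.free_blocks.front();      // reuse a node from the free chain
        sl.free_blocks.pop_front();
    }
    sl.used_blocks.push_back(p);         // append to the tail of the used chain
    return p;
}

void pool_release(std::size_t size, void* p) {
    SizeList& sl = lists[size];
    sl.used_blocks.remove(p);            // delete from the used chain...
    sl.free_blocks.push_back(p);         // ...and place on the free chain
}
```

In the real scheme the size and used-node would be recovered from the tracking header rather than passed to the release call.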
Analysis of the scheme
Compared with the problems raised in the "memory pool design" section, this scheme has the following characteristics:
● the memory pool holds no memory blocks at program start, and takes over block management only when the program actually allocates and releases memory;
● the pool does not limit the size of allocation requests; it creates a linked list for each size value and manages memory per size;
● the scheme provides no function to limit the total size of the memory pool.
From this analysis, the applicable scenario is as follows: the program allocates blocks of relatively fixed sizes (for example, only 1024-byte or 2048-byte blocks), and allocation and release occur at roughly equal frequencies (because allocating far more than is released would eventually consume too much memory and crash the system).
A thread pool is a set of one or more threads that [cyclically] execute multiple application tasks; see:
http://blog.chinaunix.net/uid-26912934-id-3188211.html.
When should a thread pool be created? Briefly, if an application frequently creates and destroys threads and each task's execution time is short, the overhead of thread creation and destruction cannot be ignored; this is where a thread pool is appropriate. If thread creation and destruction time is negligible compared with task execution time, there is no need to use a thread pool.
In general, a thread pool has the following components:
1. One or more worker threads that perform the main tasks.
2. A management thread for scheduling and administration.
3. A queue of tasks awaiting execution.
The workflow of a thread pool is roughly as follows:
1. Create and initialize the thread pool.
2. Add tasks to the thread pool's task queue.
3. A worker thread of the pool executes a task after finding it in the task queue.
4. Destroy the thread pool.
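The four steps above can be sketched as a minimal C++11 thread pool. This is an illustrative implementation under common conventions, not the patent's or the cited page's code:

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Minimal thread pool following the four workflow steps above.
class ThreadPool {
public:
    explicit ThreadPool(std::size_t n) {           // step 1: create and initialize
        for (std::size_t i = 0; i < n; ++i)
            workers_.emplace_back([this] { worker_loop(); });
    }
    ~ThreadPool() {                                // step 4: destroy the pool
        {
            std::lock_guard<std::mutex> lock(mu_);
            stop_ = true;
        }
        cv_.notify_all();
        for (auto& t : workers_) t.join();
    }
    void add_task(std::function<void()> task) {    // step 2: add to the task queue
        {
            std::lock_guard<std::mutex> lock(mu_);
            tasks_.push(std::move(task));
        }
        cv_.notify_one();
    }
private:
    void worker_loop() {                           // step 3: workers find and run tasks
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(mu_);
                cv_.wait(lock, [this] { return stop_ || !tasks_.empty(); });
                if (stop_ && tasks_.empty()) return;   // drain queue before exiting
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();
        }
    }
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex mu_;
    std::condition_variable cv_;
    bool stop_ = false;
};
```

The destructor lets the workers drain any remaining queued tasks before joining, so destroying the pool does not drop work.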
Because of the method's three steps (event definition, event reception and processing, and event distribution), the parallelization of serial events succeeds: even if the event callback function contains a great deal of time-consuming business logic, the callback is not blocked as long as the dispatch functions are allocated reasonably, and business logic that could only run serially can be executed in parallel.
Preferably, the event distribution step further comprises:
(3.3) after a dispatch function finishes, judging whether the event reference count in the event data structure is zero: if so, recycling the event data structure into the memory pool; otherwise, decrementing the event reference count by 1.
A network video recorder receives audio and video data from network cameras and, after processing, delivers the data respectively to audio/video decoding devices, to audio/video storage devices, and to preview users logged in to the recorder from terminals. The traditional approach is, after obtaining the audio/video data from the network callback function, to deliver it to each of these consumers in turn. This causes a problem: if an individual consumer's processing logic is time-consuming, or if there are many consumers (for example, multiple terminal users previewing video simultaneously), the network callback function is blocked, causing frame loss and large delays in the audio/video data delivered to the different consumers.
To solve this problem, the serial events must be converted into parallel events; a specific implementation follows.
When the callback function is a network callback function carrying network audio/video frame data, the method comprises three steps: event definition, event reception and processing, and event distribution, wherein:
(a) the event definition step defines an event data structure comprising: an event type identification number, a pointer to a parameter structure composed of the audio/video frame, channel number, media stream type, and thread context, the length of the parameter structure, a linked list of the audio/video consumers' dispatch function pointers, and the reference count of the current event;
(b) the event reception and processing step comprises:
(b.1) defining an event queue, an event queue monitor thread, a memory pool, and a thread pool;
(b.2) obtaining the audio/video frame, channel number, media stream type, and thread context from the network callback function, composing these parameters into a parameter structure, computing the memory required by the event data structure from the length of this structure and the length of the dispatch function pointer list, and initializing the event reference count to the length of the dispatch function pointer list;
(b.3) requesting memory of that size from the memory pool to hold the event data structure;
(b.4) inserting the pointer to the event data structure into the event queue through the event-add function interface;
(c) the event distribution step comprises:
(c.1) starting an event monitor thread to monitor the event queue in real time; when an event arrives in the queue, taking out the event data structure pointer and extracting from it the parameter structure and the dispatch function pointer list;
(c.2) using the thread pool to insert, one by one, tasks composed of a dispatch function and the parameter structure into the thread pool's internal work queue (a typical thread pool then sends an idle thread to execute the task);
(c.3) when a task finishes, decrementing the reference count in the event data structure by 1 (when the count reaches zero, the memory occupied by the event data structure is recycled into the memory pool; in this way the original parameters in the event data structure are distributed to the different consumers).
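Step (c.3) can be sketched with an atomic reference count: the consumer task that performs the final decrement is the one that recycles the event's memory. The structure and function names are illustrative assumptions:

```cpp
#include <atomic>
#include <cassert>

// Hypothetical event carrying a reference count, one per consumer task.
struct AvEvent {
    std::atomic<int> ref_count;
    bool recycled = false;   // stands in for returning memory to the pool
};

// Called by each consumer task as it finishes. Returns true if this call
// released the last reference and therefore recycled the event.
bool task_finished(AvEvent& ev) {
    if (ev.ref_count.fetch_sub(1) == 1) {  // we held the final reference
        ev.recycled = true;                // recycle into the memory pool here
        return true;
    }
    return false;
}
```

Using an atomic decrement matters because the consumer tasks run concurrently on thread pool workers; exactly one of them observes the count reaching zero.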
Those skilled in the relevant art can readily determine the technical solution and each technical feature of the present principles from the examples herein, and will understand that the present invention can be realized in various forms of hardware, software, firmware, processors, or combinations thereof. Although precise embodiments are given, they do not limit the present invention in any form; those skilled in the art will appreciate that any simple modification, equivalent variation, or alteration of the above embodiments made according to the technical spirit of the present invention still falls within the protection scope of the technical solution of the present invention.

Claims (3)

1. A method for converting serial events into parallel events, characterized by comprising three steps: event definition, event reception and processing, and event distribution, wherein:
(1) the event definition step defines an event data structure comprising: an event type identification number, a pointer to a data structure composed of the original event's required parameters, the length of that data structure, a linked list of event dispatch function pointers, and the reference count of the current event;
(2) the event reception and processing step comprises:
(2.1) defining an event queue, an event queue monitor thread, a memory pool, and a thread pool;
(2.2) receiving the event's required parameters from the event source;
(2.3) requesting memory from the memory pool according to the parameter size and initializing the event data structure of step (1);
(2.4) storing the event data structure in the event container;
(3) the event distribution step comprises:
(3.1) taking the event data structure out of the event container;
(3.2) using the thread pool to distribute the original parameters in the event data structure to the corresponding dispatch functions.
2. The method for converting serial events into parallel events according to claim 1, characterized in that the event distribution step further comprises:
(3.3) after a dispatch function finishes, judging whether the event reference count in the event data structure is zero: if so, recycling the event data structure into the memory pool; otherwise, decrementing the event reference count by 1.
3. The method for converting serial events into parallel events according to claim 2, characterized in that, when the callback function is a network callback function carrying network audio/video frame data, the method comprises three steps: event definition, event reception and processing, and event distribution, wherein:
(a) the event definition step defines an event data structure comprising: an event type identification number, a pointer to a parameter structure composed of the audio/video frame, channel number, media stream type, and thread context, the length of the parameter structure, a linked list of the audio/video consumers' dispatch function pointers, and the reference count of the current event;
(b) the event reception and processing step comprises:
(b.1) defining an event queue, an event queue monitor thread, a memory pool, and a thread pool;
(b.2) obtaining the audio/video frame, channel number, media stream type, and thread context from the network callback function, composing these parameters into a parameter structure, computing the memory required by the event data structure from the length of this structure and the length of the dispatch function pointer list, and initializing the event reference count to the length of the dispatch function pointer list;
(b.3) requesting memory of that size from the memory pool to hold the event data structure;
(b.4) inserting the pointer to the event data structure into the event queue through the event-add function interface;
(c) the event distribution step comprises:
(c.1) starting an event monitor thread to monitor the event queue in real time; when an event arrives in the queue, taking out the event data structure pointer and extracting from it the parameter structure and the dispatch function pointer list;
(c.2) using the thread pool to insert, one by one, tasks composed of a dispatch function and the parameter structure into the thread pool's internal work queue;
(c.3) when a task finishes, decrementing the reference count in the event data structure by 1.
CN2013103224974A 2013-07-29 2013-07-29 Method for converting serial events into parallel events Pending CN103345385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2013103224974A CN103345385A (en) 2013-07-29 2013-07-29 Method for converting serial events into parallel events

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2013103224974A CN103345385A (en) 2013-07-29 2013-07-29 Method for converting serial events into parallel events

Publications (1)

Publication Number Publication Date
CN103345385A true CN103345385A (en) 2013-10-09

Family

ID=49280183

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2013103224974A Pending CN103345385A (en) 2013-07-29 2013-07-29 Method for converting serial events into parallel events

Country Status (1)

Country Link
CN (1) CN103345385A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108206752A (en) * 2016-12-19 2018-06-26 北京视联动力国际信息技术有限公司 A kind of management method and device regarding networked devices
WO2018161844A1 (en) * 2017-03-10 2018-09-13 Huawei Technologies Co., Ltd. Lock-free reference counting
CN110347450A (en) * 2019-07-15 2019-10-18 北京一流科技有限公司 Multithread concurrent control system and its method
CN110515713A (en) * 2019-08-13 2019-11-29 北京安盟信息技术股份有限公司 A kind of method for scheduling task, equipment and computer storage medium
CN112040317A (en) * 2020-08-21 2020-12-04 海信视像科技股份有限公司 Event response method and display device
CN112492193A (en) * 2019-09-12 2021-03-12 华为技术有限公司 Method and equipment for processing callback stream

Citations (3)

Publication number Priority date Publication date Assignee Title
CN1281613A (en) * 1997-10-07 2001-01-24 卡纳尔股份有限公司 Multithread data processor
US20010039594A1 (en) * 1999-02-03 2001-11-08 Park Britt H. Method for enforcing workflow processes for website development and maintenance
CN102902512A (en) * 2012-08-31 2013-01-30 浪潮电子信息产业股份有限公司 Multi-thread parallel processing method based on multi-thread programming and message queue

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
CN1281613A (en) * 1997-10-07 2001-01-24 卡纳尔股份有限公司 Multithread data processor
US20010039594A1 (en) * 1999-02-03 2001-11-08 Park Britt H. Method for enforcing workflow processes for website development and maintenance
CN102902512A (en) * 2012-08-31 2013-01-30 浪潮电子信息产业股份有限公司 Multi-thread parallel processing method based on multi-thread programming and message queue

Cited By (10)

Publication number Priority date Publication date Assignee Title
CN108206752A (en) * 2016-12-19 2018-06-26 北京视联动力国际信息技术有限公司 A kind of management method and device regarding networked devices
WO2018161844A1 (en) * 2017-03-10 2018-09-13 Huawei Technologies Co., Ltd. Lock-free reference counting
CN110347450A (en) * 2019-07-15 2019-10-18 北京一流科技有限公司 Multithread concurrent control system and its method
CN110347450B (en) * 2019-07-15 2024-02-09 北京一流科技有限公司 Multi-stream parallel control system and method thereof
CN110515713A (en) * 2019-08-13 2019-11-29 北京安盟信息技术股份有限公司 A kind of method for scheduling task, equipment and computer storage medium
CN112492193A (en) * 2019-09-12 2021-03-12 华为技术有限公司 Method and equipment for processing callback stream
CN112492193B (en) * 2019-09-12 2022-02-18 华为技术有限公司 Method and equipment for processing callback stream
US11849213B2 (en) 2019-09-12 2023-12-19 Huawei Technologies Co., Ltd. Parallel preview stream and callback stream processing method and device
CN112040317A (en) * 2020-08-21 2020-12-04 海信视像科技股份有限公司 Event response method and display device
CN112040317B (en) * 2020-08-21 2022-08-09 海信视像科技股份有限公司 Event response method and display device

Similar Documents

Publication Publication Date Title
CN103345385A (en) Method for converting serial events into parallel events
US9720740B2 (en) Resource management in MapReduce architecture and architectural system
US10296386B2 (en) Processing element management in a streaming data system
US10678722B2 (en) Using a decrementer interrupt to start long-running hardware operations before the end of a shared processor dispatch cycle
WO2017080431A1 (en) Log analysis-based database replication method and device
US9262222B2 (en) Lazy initialization of operator graph in a stream computing application
US8996925B2 (en) Managing error logs in a distributed network fabric
US20140095506A1 (en) Compile-time grouping of tuples in a streaming application
WO2018176559A1 (en) Game control method and device
US20130166618A1 (en) Predictive operator graph element processing
US20160170813A1 (en) Technologies for fast synchronization barriers for many-core processing
CN103167222A (en) Nonlinear cloud editing system
CN103200350A (en) Nonlinear cloud editing method
US8935516B2 (en) Enabling portions of programs to be executed on system z integrated information processor (zIIP) without requiring programs to be entirely restructured
US20160062777A1 (en) Managing middleware using an application manager
CN105204811A (en) Multi-circuit control system and method
CN112199202B (en) Development method for expanding Kafka consumption capacity
CN105808585B (en) Method and device for processing streaming data
KR102144578B1 (en) Method and apparatus for executing an application based on dynamically loaded module
US20200285489A1 (en) Systems and methods for facilitating real-time analytics
JP2018538632A (en) Method and device for processing data after node restart
CN108848398B (en) Method, device, terminal and storage medium for distributing local barrage messages
US20200293294A1 (en) Specification and execution of real-time streaming applications
CN109947561A (en) A kind of virtual scene processing method, device and storage medium
CN112486815A (en) Application program analysis method and device, server and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20131009