
TW201317897A - Task scheduling and allocation for multi-core/many-core management framework and method thereof - Google Patents


Info

Publication number
TW201317897A
Authority
TW
Taiwan
Prior art keywords
core
work
application
service
cores
Prior art date
Application number
TW100139601A
Other languages
Chinese (zh)
Other versions
TWI442323B (en)
Inventor
Pi-Cheng Hsiu
Der-Nien Lee
Tei-Wei Kuo
Zhao-Rong Lai
Original Assignee
Univ Nat Taiwan
Academia Sinica
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nat Taiwan, Academia Sinica filed Critical Univ Nat Taiwan
Priority to TW100139601A priority Critical patent/TWI442323B/en
Publication of TW201317897A publication Critical patent/TW201317897A/en
Application granted granted Critical
Publication of TWI442323B publication Critical patent/TWI442323B/en

Links

Landscapes

  • Multi Processors (AREA)

Abstract

A task scheduling and allocation management framework for multi-core/many-core and a method thereof are provided. Within a CPU that contains the multi-core/many-core management framework, cores are designated as application cores and service cores that run application tasks and service tasks respectively, and the two kinds of cores communicate through inter-process communication (IPC). Direct and indirect task blocking and preemption costs in task scheduling and allocation can thereby be managed efficiently.

Description

Task scheduling and allocation management framework for multi-core/many-core and method thereof

A task scheduling and allocation management framework and a method thereof, in particular a framework and method for multi-core/many-core in which application tasks and service tasks can be executed separately on different designated cores.

In recent years, central processing units have evolved from single-core to multi-core architectures, and as core counts keep growing, from multi-core to many-core architectures. Because multi-core/many-core architectures are developing so rapidly, system engineers face ever greater challenges from the growing number of cores when designing systems.

Under a multi-core/many-core architecture, cores incur blocking time because resources are shared, leaving some cores idle and reducing system performance. To avoid this problem, task scheduling across the cores becomes an important part of system design.

Many fixed- or dynamic-priority scheduling algorithms for independent tasks have been proposed, for example: "Optimal Priority Assignment and Feasibility of Static Priority Tasks with Arbitrary Start Times" by N. C. Audsley in 1991; "On the Complexity of Fixed-Priority Scheduling of Periodic, Real-Time Tasks" by J. Y.-T. Leung and J. Whitehead in 1982; "Scheduling Algorithms for Multiprogramming in a Hard Real Time Environment" by C. L. Liu and J. W. Layland in 1973; and "Fundamental Design Problems of Distributed Systems for the Hard Real Time Environment" by A. K. Mok in 1983.

However, when tasks also share resources, priority inversions arise and produce unnecessary blocking costs. To solve this problem, many task-synchronization protocols have been proposed, for example: "Stack-Based Scheduling of Realtime Processes" by T. P. Baker in 1991; "Dynamic Priority Ceilings: A Concurrency Control Protocol for Real-Time Systems" by M.-I. Chen and K.-J. Lin in 1990; and "Priority Inheritance Protocol: An Approach to Real-Time Synchronization" by L. Sha, R. Rajkumar, and J. P. Lehoczky in 1990. Among these, the priority ceiling protocol (PCP) manages priority ceilings and priority inheritance under rate-monotonic priority assignment to prevent deadlock and priority inversion.

Task scheduling and task synchronization become even more complicated under a multi-core/many-core architecture, so global scheduling algorithms with dynamic task migration and partitioned scheduling algorithms with static task allocation have been proposed to address these problems.

The multiprocessor priority ceiling protocol (MPCP) extends the priority-ceiling concept to multi-core/many-core architectures, managing remote blocking and priority inversion for shared resources with the help of global semaphores. Furthermore, "Optimality Results for Multiprocessor Real-Time Locking" by B. Brandenburg and J. Anderson in 2010 points out the relationship between the number of times a task is blocked and priority inversion under multi-core/many-core scheduling.

A practical example illustrates prior-art task scheduling; please refer to FIG. 1, which is a task-scheduling timing diagram of the prior art.

Take a central processing unit with two cores as an example. Assume the second task 32 executes on the first core 41, and the first task 31 and third task 33 execute on the second core 42. First, the second task 32 starts on the first core 41, then the third task 33 starts on the second core 42. At time t0, the third task 33 on the second core 42 successfully enters the critical section. At time t1, the second task 32 on the first core 41 also needs to enter the critical section, but because the third task 33 on the second core 42 has already entered it, at this time (t1) the second task 32 on the first core 41 suspends to wait for the third task 33 to finish the critical section, and the first core 41 enters an idle state.

At time t2, the high-priority first task 31 is released on the second core 42; the third task 33 on the second core 42 is preempted, and the second core 42 executes the first task 31. Only after the first task 31 completes at time t3 does the second core 42 resume the third task 33.

At time t4, the third task 33 on the second core 42 releases the critical section; only then (time t4) can the second task 32 on the first core 41 enter the critical section, and the first core 41 resume executing the second task 32.
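The blocking in this FIG. 1 scenario can be quantified with a small sketch. The function and its time-unit parameters are illustrative assumptions, not from the patent; the point is that the preemption of task 33 by task 31 lengthens the time task 32 spends blocked on the other core.

```python
def blocking_time(lock_acquired, lock_requested,
                  preemption_start, preemption_end, cs_work):
    # Task 33 acquires the critical section at `lock_acquired` and needs
    # `cs_work` units of CPU to finish it, but is preempted for
    # (preemption_end - preemption_start) units by the high-priority task 31,
    # so the lock is released only after both intervals elapse.
    release = lock_acquired + cs_work + (preemption_end - preemption_start)
    # Task 32, which requested the lock at `lock_requested`, is blocked
    # (and its core idle) for the whole remaining wait.
    return release - lock_requested

# t0 = 0: task 33 enters the critical section (3 units of work assumed);
# t1 = 1: task 32 requests it; t2..t3 = 2..4: task 31 preempts task 33.
print(blocking_time(0, 1, 2, 4, 3))  # 4 units blocked instead of 2
```

With no preemption (`preemption_start == preemption_end`) the same call yields 2 units, showing the indirect cost preemption adds to blocking on another core.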

In summary, the prior art has long suffered from prolonged task blocking caused by task preemption, so an improved technique is needed to solve this problem.

In view of the prior art's prolonged task blocking caused by task preemption, the present invention discloses a task scheduling and allocation management framework for multi-core/many-core and a method thereof, as follows:

The task scheduling and allocation management framework for multi-core/many-core disclosed by the present invention comprises a central processing unit (CPU) with multi-core/many-core, the CPU further comprising at least one service core and a plurality of application cores.

At least one core of the multi-core/many-core is designated as a service core, which is assigned to execute service tasks; the cores not designated as service cores are the application cores, which are assigned to execute application tasks. When an application task executing on one of the application cores needs to enter a service task, the application core issues a service-execution request through inter-process communication (IPC) to request one of the service cores to execute the service task, and the application task suspends so that the application core enters an idle state. When the service core finishes the service task, control returns to the application core through inter-process communication and the application task resumes.

The task scheduling and allocation management method for multi-core/many-core disclosed by the present invention comprises the following steps:

First, in a central processing unit (CPU) with multi-core/many-core, at least one core is designated as a service core and the remaining cores not designated as service cores are application cores, where service cores are assigned to execute service tasks and application cores to execute application tasks. Next, when an application task executing on one of the application cores needs to enter a service task, the application core issues a service-execution request through inter-process communication (IPC) to request one of the service cores to execute the service task, and the application task suspends so that the application core enters an idle state. Finally, when the service core finishes the service task, control returns to the application core through inter-process communication and the application task resumes.
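These steps can be sketched as a toy dispatcher in which a designated service core drains a priority-ordered request queue on behalf of the application cores. This is a minimal illustration with made-up names, not the patent's implementation:

```python
import heapq

class ServiceCore:
    """Toy service core: executes queued service tasks by priority."""

    def __init__(self):
        self._queue = []   # (priority, seq, task); lower value runs first
        self._seq = 0      # tie-breaker preserving request order

    def request(self, priority, task):
        # Step 120: an application core forwards a service task over "IPC"
        # (here just an enqueue) and would then idle until the reply.
        heapq.heappush(self._queue, (priority, self._seq, task))
        self._seq += 1

    def run(self):
        # Steps 120/130: execute ready service tasks in priority order and
        # return each result to the requesting application core.
        results = []
        while self._queue:
            _, _, task = heapq.heappop(self._queue)
            results.append(task())
        return results

core = ServiceCore()
core.request(2, lambda: "third service task done")   # requested first, lower priority
core.request(1, lambda: "second service task done")  # requested later, higher priority
print(core.run())  # the higher-priority request is served first
```

The design choice mirrors the text: requests queue up while the service core is busy, and among queued requests priority, not arrival order, decides what runs next.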

The disclosed apparatus and method differ from the prior art in that the present invention designates the cores of a multi-core/many-core CPU as application cores and service cores, so that application tasks and service tasks are executed separately on different designated cores, with execution requests between cores carried by inter-process communication. As a result, fewer cores need to complete work within a given time, and task blocking can be managed more effectively.

Through the above technical means, the present invention achieves the technical effect of efficiently managing direct and indirect task blocking and task preemption during task scheduling and allocation.

Embodiments of the present invention are described in detail below with reference to the drawings, so that how the invention applies technical means to solve the technical problem and achieve the technical effect can be fully understood and practiced.

The task scheduling and allocation management framework for multi-core/many-core disclosed by the present invention is described first; please refer to FIG. 2, a schematic diagram of the apparatus for multi-core/many-core task scheduling and allocation management of the present invention.

The framework comprises a central processing unit (CPU) 10 with multi-core/many-core 11, the CPU 10 further comprising at least one service core 111 and a plurality of application cores 112.

The service core 111 is at least one core 11 of the multi-core/many-core 11 designated to execute a service task 21; in the present invention, a service task 21 is, for example, the execution of a critical section.

The application cores 112 are the cores 11 not designated as service cores 111, and are assigned to execute application tasks 22; in the present invention, an application task 22 is, for example, the execution of an application program.

Notably, the priorities of the service tasks 21 and application tasks 22 are determined in a fixed-priority manner; how fixed priorities are assigned is described in the prior art and is not repeated here.

The service core 111 and application cores 112 schedule and allocate the service tasks 21 and application tasks 22 according to their priorities, and the application-task queue is preemptible: higher-priority service tasks 21 and application tasks 22 are executed first by the service core 111 and application cores 112. That is, when an application task 22 (which at that point has the highest priority) is suspended, the application core 112 can execute the next-highest-priority application task 22.

When an application task 22 executing on one of the application cores 112 needs to enter a service task 21, that is, needs to execute a critical section, the application core 112 issues a service-execution request through inter-process communication (IPC) to request one of the service cores 111 to execute the service task 21 (i.e. the critical section). The inter-process communication may include remote procedure call (RPC), signal, message passing, shared memory, memory-mapped files, pipe, or socket; these are merely examples and do not limit the scope of the invention.
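As one concrete instance of the listed mechanisms, a message-passing exchange can be sketched with two in-process queues standing in for the IPC channel. This is a deliberate simplification with invented names; a real implementation would cross process or core boundaries.

```python
import queue
import threading

requests = queue.Queue()   # application core -> service core
replies = queue.Queue()    # service core -> application core

def service_core():
    # The service core waits for a service-execution request, runs the
    # critical section on the requester's behalf, and sends the reply back.
    name = requests.get()
    replies.put(f"{name}: critical section executed")

worker = threading.Thread(target=service_core)
worker.start()

requests.put("second application task")  # the IPC service-execution request
result = replies.get()                   # the application core idles here
worker.join()
print(result)
```

The blocking `replies.get()` models the text's behavior exactly: the application task suspends, and its core stays idle, until the service core's reply arrives.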

When an application core 112 issues a service-execution request through IPC to request a service core 111 to execute a service task 21, the application task 22 suspends so that the application core 112 enters an idle state, and the service task 21 is added to the service-task queue of that service core 111. Ready service tasks 21 on the service core 111 are scheduled by priority, and when no service task 21 is pending, the service core 111 enters an idle state.

The priority of a service task 21 takes into account the priority of the requesting application task 22: when the application task 22 has high priority, the service task 21 also has high priority, and when the application task 22 has low priority, the service task 21 also has low priority.
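This rule amounts to each service task inheriting the priority of the application task that requested it. A tiny sketch with illustrative names (lower number meaning higher priority is an assumption of this sketch):

```python
# Each pending service task carries the priority of the application task
# that requested it; the service core then orders its queue by that value.

def inherit_priority(app_task):
    return {"service_for": app_task["name"], "priority": app_task["priority"]}

pending = [
    inherit_priority({"name": "third application task", "priority": 3}),
    inherit_priority({"name": "second application task", "priority": 1}),
]
pending.sort(key=lambda s: s["priority"])
print([s["service_for"] for s in pending])
```

Because a high-priority application task's critical section is served ahead of a low-priority one's, the requester's wait stays proportional to its own importance.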

Then, when the service core 111 finishes the service task 21, control returns through inter-process communication to the application core 112, which resumes the application task 22. In this way the service core 111 and application cores 112 can efficiently manage direct and indirect task-blocking and task-preemption costs during task scheduling and allocation.

An embodiment is described next to explain the operation and flow of the present invention; please refer to FIG. 2, FIG. 3, and FIG. 4 together. FIG. 3 is a flow chart of the task scheduling and allocation management method for multi-core/many-core of the present invention; FIG. 4 is a timing diagram of the task scheduling and allocation management for multi-core/many-core of the present invention.

This embodiment takes a central processing unit with three cores as an example, although the invention is not limited thereto. One core of the three-core CPU is designated as the service core 111, and the cores not designated as the service core 111 are the first application core 1121 and the second application core 1122 (step 110).

Assume here that the second application task 222 executes on the first application core 1121, that the first application task 221 and third application task 223 execute on the second application core 1122, and that the service task 21 executes on the service core 111 (step 110).

Notably, the second application task 222 and third application task 223 share the same global semaphore, so they request the same service task 21 (i.e. the same critical section). For convenience of explanation, the service task 21 requested by the second application task 222 is denoted the second service task 212, and the service task 21 requested by the third application task 223 is denoted the third service task 213.

First, the second application task 222 starts on the first application core 1121, then the third application task 223 starts on the second application core 1122. At time t0, the third application task 223 on the second application core 1122 needs to execute the third service task 213, so the second application core 1122 issues a service-execution request through IPC to the service core 111, requesting it to execute the third service task 213 (i.e. the critical section). At this time (t0) the third application task 223 on the second application core 1122 suspends, and the second application core 1122 enters an idle state (step 120).

Next, at time t1, the second application task 222 on the first application core 1121 needs to execute the second service task 212, so the first application core 1121 issues a service-execution request through IPC to the service core 111, requesting it to execute the second service task 212 (i.e. the critical section). However, at this time (t1) the service core 111 is executing the third service task 213 for the third application task 223, so the second service task 212 requested by the second application task 222 first enters the service core 111's run queue to await execution. At this time (t1) the second application task 222 on the first application core 1121 suspends, and the first application core 1121 enters an idle state (step 120).

Next, at time t2, the first application task 221 needs to start; since the second application core 1122 is idle at this time (t2), the first application task 221 begins executing on the second application core 1122.

Next, at time t3, the third service task 213 completes, and the service core 111 returns through IPC to the second application core 1122 so that the third application task 223 can resume. However, because the second application core 1122 is executing the first application task 221 at this time (t3), and the executing first application task 221 has higher priority, the third application task 223 must wait until the first application task 221 completes at time t5 before it resumes (step 130).

Also at time t3, when the third service task 213 completes, the service core 111 immediately executes the second service task 212 from its run queue. When the second service task 212 completes at time t4, the service core 111 returns through IPC to the first application core 1121 at this time (t4), so that the second application task 222 on the first application core 1121 can resume (step 130).

In this way, the service core 111 and application cores 112 can efficiently manage direct and indirect task-blocking and task-preemption costs during task scheduling and allocation.

In summary, the present invention differs from the prior art in that cores of a multi-core/many-core CPU are designated as application cores and service cores, so that application tasks and service tasks are executed separately on different designated cores, with execution requests between cores carried by inter-process communication; fewer cores thus need to complete work within a given time, and task blocking can be managed more effectively.

This technique solves the prior art's prolonged task blocking caused by task preemption, thereby achieving the technical effect of efficiently managing direct and indirect task blocking and task preemption during task scheduling and allocation.

Although embodiments of the present invention are disclosed above, the description is not intended to limit the scope of patent protection of the invention. Those of ordinary skill in the art may make minor changes in form and detail without departing from the spirit and scope of the disclosure. The scope of patent protection of the present invention is defined by the appended claims.

10 ... central processing unit
11 ... core
111 ... service core
112 ... application core
1121 ... first application core
1122 ... second application core
21 ... service task
212 ... second service task
213 ... third service task
22 ... application task
221 ... first application task
222 ... second application task
223 ... third application task
31 ... first task
32 ... second task
33 ... third task
41 ... first core
42 ... second core
t0 ... time
t1 ... time
t2 ... time
t3 ... time
t4 ... time
t5 ... time

Step 110: In a central processing unit with multi-core/many-core, designate at least one core as a service core, the remaining cores not designated as service cores being application cores, where service cores are assigned to execute service tasks and application cores to execute application tasks.

Step 120: When an application task executing on one of the application cores needs to enter a service task, the application core issues a service-execution request through inter-process communication to request one of the service cores to execute the service task, and the application task suspends so that the application core enters an idle state.

Step 130: When the service core finishes the service task, control returns through inter-process communication to the application core, which resumes the application task.

FIG. 1 is a task-scheduling timing diagram of the prior art.

FIG. 2 is a schematic diagram of the apparatus for multi-core/many-core task scheduling and allocation management of the present invention.

FIG. 3 is a flow chart of the task scheduling and allocation management method for multi-core/many-core of the present invention.

FIG. 4 is a timing diagram of the task scheduling and allocation management for multi-core/many-core of the present invention.

10 ... central processing unit
11 ... core
111 ... service core
112 ... application core
21 ... service task
22 ... application task

Claims (10)

1. A task scheduling and allocation management framework for multi-core/many-core, comprising: a central processing unit (CPU) having multi-core/many-core, the CPU further comprising: at least one service core, wherein at least one core of the multi-core/many-core is designated as the at least one service core, a service core being assigned to execute service tasks; and a plurality of application cores, being the cores not designated as service cores, an application core being assigned to execute application tasks; wherein, when an application task executing on one of the application cores needs to enter a service task, that application core issues a service-execution request through inter-process communication (IPC) to request one of the service cores to execute the service task, and the application task suspends execution so that the application core enters an idle state; and when the service core finishes the service task, control returns through inter-process communication to the application core, which resumes the application task.

2. The task scheduling and allocation management framework for multi-core/many-core of claim 1, wherein the service task refers to the execution of a critical section.
如申請專利範圍第1項所述的用於多核心/眾核心的工作排程與分配管理架構,其中服務工作以及應用工作是以固定優先權(Fixed-Priority)方式決定每一個服務工作以及應用工作的優先權,並且當應用核心其中之一中執行的應用工作需要進入服務工作時,該服務工作的優先權應考量該應用工作的優先權。The work scheduling and distribution management architecture for multi-core/core cores as described in claim 1, wherein the service work and the application work determine each service work and application in a fixed-priority manner. The priority of the work, and when the application work performed in one of the application cores needs to enter the service work, the priority of the service work should take into account the priority of the application work. 如申請專利範圍第3項所述的用於多核心/眾核心的工作排程與分配管理架構,其中每一個應用核心中執行的應用工作會依照優先權排程與分配,且應用工作的序列是會被搶占,當應用核心其中之一中執行的應用工作需要進入服務工作時,該服務工作會加到服務工作的序列中,已經準備好的服務工作應依照優先權排程。The work scheduling and distribution management architecture for multi-core/core cores as described in claim 3, wherein the application work performed in each application core is scheduled according to priority, and the sequence of application work It will be preempted. When the application work performed in one of the application cores needs to enter the service work, the service work will be added to the service work sequence, and the prepared service work should be scheduled according to the priority. 如申請專利範圍第1項所述的用於多核心/眾核心的工作排程與分配管理架構,其中進程間通訊包含有遠端程序呼叫(Remote Procedure Call,RPC)、訊號(Signal)、訊息傳遞(Message Passing)、共享記憶體(Shared Memory)、記憶體對應檔案(Memory-Mapped Files)、管道(Pipe)、套接(Socket)。The work scheduling and distribution management architecture for multi-core/core cores as described in claim 1, wherein the inter-process communication includes a remote procedure call (RPC), a signal (Signal), and a message. Message Passing, Shared Memory, Memory-Mapped Files, Pipes, Sockets. 
一種用於多核心/眾核心的工作排程與分配管理方法,其包含下列步驟:在具有多核心(Multi-Core)/眾核心(Many-Core)的中央處理器(Central Processing Unit,CPU)中指定至少一核心為服務核心(Service Core),其餘未被指定為服務核心的核心為應用核心(Application Core):其中,服務核心是被指定執行服務工作(Service Task),應用核心是被指定執行應用工作(Application Task);當應用核心其中之一執行的應用工作需要進入服務工作時,該應用核心以進程間通訊(Inter-Process Communication,IPC)方式發出服務執行請求,以請求服務核心其中之一執行服務工作,且該應用工作會暫停執行以使該應用核心進入閒置狀態;及當該服務核心執行服務工作完成後,再透過進程間通訊返回該應用核心以再繼續執行該應用工作。A work scheduling and distribution management method for a multi-core/core core, comprising the following steps: in a Multi-Core/Many-Core Central Processing Unit (CPU) At least one core is specified as the Service Core, and the rest is not designated as the application core. The core of the service is specified to execute the Service Task, and the application core is specified. Execute the application task (Application Task); when the application work performed by one of the application cores needs to enter the service work, the application core sends a service execution request in the inter-process communication (IPC) manner to request the service core. One performs the service work, and the application work is suspended to make the application core enter the idle state; and when the service core performs the service work, the application core is returned to the application core through the inter-process communication to continue the application work. 如申請專利範圍第6項所述的用於多核心/眾核心的工作排程與分配管理方法,其中服務核心是被指定執行服務工作步驟中的服務工作是指臨界區間(Critical Section)的執行。The work scheduling and distribution management method for multi-core/core cores as described in claim 6 of the patent application scope, wherein the service core is specified to perform the service work step, and the service work refers to the execution of the critical section (Critical Section). . 
如申請專利範圍第6項所述的用於多核心/眾核心的工作排程與分配管理方法,其中服務核心是被指定執行服務工作,應用核心是被指定執行應用工作步驟的服務工作以及應用工作是以固定優先權(Fixed-Priority)方式決定每一個服務工作以及應用工作的優先權,並且當應用核心其中之一中執行的應用工作需要進入服務工作時,該服務工作的優先權應考量該應用工作的優先權。The work scheduling and distribution management method for multi-core/core cores as described in claim 6 wherein the service core is designated to perform service work, and the application core is service work and application designated to perform application work steps. Work is to determine the priority of each service work and application work in a fixed-priority manner, and when the application work performed in one of the application cores needs to enter the service work, the priority of the service work should be considered. The priority of the application work. 如申請專利範圍第8項所述的用於多核心/眾核心的工作排程與分配管理方法,其中每一個應用核心中執行的應用工作會依照優先權排程與分配,且應用工作的序列是會被搶占,當應用核心其中之一中執行的應用工作需要進入服務工作時,該服務工作會加到服務工作的序列中,已經準備好的服務工作應依照優先權排程。The work scheduling and distribution management method for multi-core/core is described in claim 8, wherein the application work performed in each application core is arranged according to priority scheduling and allocation, and the sequence of application work It will be preempted. When the application work performed in one of the application cores needs to enter the service work, the service work will be added to the service work sequence, and the prepared service work should be scheduled according to the priority. 如申請專利範圍第8項所述的用於多核心/眾核心的工作排程與分配管理方法,其中進程間通訊包含有遠端程序呼叫(Remote Procedure Call,RPC)、訊號(Signal)、訊息傳遞(Message Passing)、共享記憶體(Shared Memory)、記憶體對應檔案(Memory-Mapped Files)、管道(Pipe)、套接(Socket)。The method for scheduling and assigning management for a multi-core/core core, as described in claim 8, wherein the inter-process communication includes a remote procedure call (RPC), a signal (Signal), and a message. Message Passing, Shared Memory, Memory-Mapped Files, Pipes, Sockets.
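The mechanism recited in the claims can be illustrated with a minimal single-process simulation. This is an illustrative sketch, not the patent's implementation: Python threads stand in for cores, a heap-backed priority queue stands in for the IPC channel, and the names `ServiceCore`, `request`, and `app_task` are assumptions introduced here for illustration. An application "core" enqueues a service task whose priority reflects the requesting application task (claims 3 and 8), then blocks in an idle state until the dedicated service "core" finishes the critical section and signals completion (claims 1 and 6).

```python
import heapq
import threading

class ServiceCore(threading.Thread):
    """Dedicated 'core' that executes service tasks (critical sections)
    in fixed-priority order. Illustrative name, not from the patent."""
    def __init__(self):
        super().__init__(daemon=True)
        self.cond = threading.Condition()
        self.queue = []      # heap of (priority, seq, fn, done_event)
        self.seq = 0         # tie-breaker so fn objects are never compared
        self.stopped = False

    def request(self, priority, fn):
        """Called from an application 'core': stand-in for the IPC
        service execution request. The caller blocks (goes idle)
        until the service task completes."""
        done = threading.Event()
        with self.cond:
            heapq.heappush(self.queue, (priority, self.seq, fn, done))
            self.seq += 1
            self.cond.notify()
        done.wait()          # application task suspends here

    def run(self):
        while True:
            with self.cond:
                while not self.queue and not self.stopped:
                    self.cond.wait()
                if self.stopped and not self.queue:
                    return
                _, _, fn, done = heapq.heappop(self.queue)
            fn()             # execute the critical section
            done.set()       # stand-in for the IPC return to the caller

    def stop(self):
        with self.cond:
            self.stopped = True
            self.cond.notify()

order = []
core = ServiceCore()
core.start()

def app_task(priority, name):
    # Non-critical work would run on the application core itself;
    # only the critical section is delegated to the service core.
    core.request(priority, lambda: order.append(name))

workers = [threading.Thread(target=app_task, args=(p, n))
           for p, n in [(2, "low"), (1, "high")]]
for w in workers:
    w.start()
for w in workers:
    w.join()
core.stop()
print(order)   # both service tasks executed by the service core
```

Both application tasks block until their delegated critical sections run on the single service core, so mutual exclusion falls out of the design rather than per-task locking; when several requests are pending, the lower-numbered (higher) priority is popped first.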
TW100139601A 2011-10-31 2011-10-31 Task scheduling and allocation for multi-core/many-core management framework and method thereof TWI442323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW100139601A TWI442323B (en) 2011-10-31 2011-10-31 Task scheduling and allocation for multi-core/many-core management framework and method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW100139601A TWI442323B (en) 2011-10-31 2011-10-31 Task scheduling and allocation for multi-core/many-core management framework and method thereof

Publications (2)

Publication Number Publication Date
TW201317897A true TW201317897A (en) 2013-05-01
TWI442323B TWI442323B (en) 2014-06-21

Family

ID=48871963

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100139601A TWI442323B (en) 2011-10-31 2011-10-31 Task scheduling and allocation for multi-core/many-core management framework and method thereof

Country Status (1)

Country Link
TW (1) TWI442323B (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI756974B (en) 2020-12-09 2022-03-01 財團法人工業技術研究院 Machine learning system and resource allocation method thereof

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI503742B (en) * 2014-04-21 2015-10-11 Nat Univ Tsing Hua Multiprocessors systems and processes scheduling methods thereof
CN109614249A (en) * 2018-12-04 2019-04-12 郑州云海信息技术有限公司 A kind of method, apparatus and computer readable storage medium for simulating multi-core communication
CN109614249B (en) * 2018-12-04 2022-02-18 郑州云海信息技术有限公司 Method, device and computer readable storage medium for simulating multi-core communication

Also Published As

Publication number Publication date
TWI442323B (en) 2014-06-21

Similar Documents

Publication Publication Date Title
US20130061220A1 (en) Method for on-demand inter-cloud load provisioning for transient bursts of computing needs
KR101953906B1 (en) Apparatus for scheduling task
Wieder et al. Efficient partitioning of sporadic real-time tasks with shared resources and spin locks
US9104500B1 (en) Lock-free job scheduler for multi-processor systems
Yang et al. Global real-time semaphore protocols: A survey, unified analysis, and comparison
US20100131956A1 (en) Methods and systems for managing program-level parallelism
WO2014101561A1 (en) Method and device for implementing multi-application parallel processing on single processor
KR101733117B1 (en) Task distribution method on multicore system and apparatus thereof
CN103136055A (en) Method and device used for controlling using of computer resource in data base service
JP2010079622A (en) Multi-core processor system and task control method thereof
JP7003874B2 (en) Resource reservation management device, resource reservation management method and resource reservation management program
WO2013185571A1 (en) Thread control and invoking method of multi-thread virtual assembly line processor, and processor thereof
Casini et al. Analyzing parallel real-time tasks implemented with thread pools
Yang et al. Resource-oriented partitioning for multiprocessor systems with shared resources
TWI442323B (en) Task scheduling and allocation for multi-core/many-core management framework and method thereof
Reano et al. Intra-node memory safe gpu co-scheduling
KR101694302B1 (en) Apparatus and method foe managing heterogeneous multicore processor system
US8977752B2 (en) Event-based dynamic resource provisioning
JP2020160482A (en) Performance estimation device, terminal device, system LSI and program
Negrean et al. Response-time analysis of arbitrarily activated tasks in multiprocessor systems with shared resources
Teng et al. Scheduling real-time workflow on MapReduce-based cloud
US11061730B2 (en) Efficient scheduling for hyper-threaded CPUs using memory monitoring
JP7122299B2 (en) Methods, apparatus, devices and storage media for performing processing tasks
Ruaro et al. Dynamic real-time scheduler for large-scale MPSoCs
JP2015164052A (en) Control program for multi-core processor, electronic apparatus, and control method