
WO2016118164A1 - Scheduler-assigned processor resource groups - Google Patents

Scheduler-assigned processor resource groups

Info

Publication number
WO2016118164A1
WO2016118164A1 (PCT/US2015/012730)
Authority
WO
WIPO (PCT)
Prior art keywords
processor resource
scheduler
processor
queue
run
Prior art date
Application number
PCT/US2015/012730
Other languages
French (fr)
Inventor
Daniel Gmach
Vanish Talwar
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Priority to PCT/US2015/012730 priority Critical patent/WO2016118164A1/en
Publication of WO2016118164A1 publication Critical patent/WO2016118164A1/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5012Processor sets

Definitions

  • Computers contain computational resources as a mechanism to execute applications. Computers are able to execute multiple processes based on scheduling time with resources for the processes to complete. On a single processor computer, multiple applications can appear to be running simultaneously when most processes are waiting while a single process is using the processor at any given time. The length of time to complete a process is affected by the scheduling policy managing access to each resource.
  • Figures 1 and 2 are block diagrams depicting example scheduler systems.
  • FIG. 4 depicts example modules used to implement example scheduler systems.
  • Figures 5 and 6 are flow diagrams depicting example methods of resource scheduling.
  • Compute devices commonly execute multiple applications (e.g., a group of associated, executable processes to perform a specific operation or set of operations).
  • Applications can require special schedulers, such as a video streaming application that utilizes a real-time scheduler to guarantee jitter-free video provisioning.
  • a scheduler can maintain a run-queue according to a scheduler policy.
  • a fair-share scheduler can manage a run-queue to provision time on a processor resource to allow each process (e.g., an instance of an operation to perform on the system 100) in the queue to receive the same time interval on the processor resource.
  • a real-time scheduler can maintain a run-queue where a priority process can take over the processor resource at any time and pause other processes with less priority from utilizing the processor resource until the priority process is complete.
  • Personal computer systems commonly include multiple core processors and enterprise systems can include multiple central processing units ("CPUs"). Management of the entire pool of processor resources with a single scheduler can fail to meet the requirements of each application even when multiple cores are available.
  • CPU: central processing unit
  • processor resource groups of a system supporting multiple processor resources where each processor resource group is associated with a scheduling policy provided by a scheduler and tasks are allocated to a processor resource group based on their scheduling requirements.
  • the cores of a system can be space partitioned into processor resource groups to allow a group of cores to execute one scheduling policy while a disjoint set of cores can execute another scheduling policy.
  • a compute apparatus can include a framework for supporting heterogeneous schedulers of an operating system to enable application execution with different scheduling requirements on the same physical system.
  • FIGS 1 and 2 are block diagrams depicting example scheduler systems 100 and 200.
  • the example scheduler system 100 of Figure 1 generally includes a processor resource assignment engine 104, a process assignment engine 106, and a plurality of processor resources 110.
  • the process assignment engine 106 can assign a process to a processor resource group maintained by the processor resource assignment engine 104 where each processor resource group is a subset of the plurality of processor resources 110 and each processor resource group is managed by a scheduler.
  • the example scheduler system 100 can include a container engine (not shown) to allow processes of the system 100 to be organized into groups that are associated with the processor resource groups. The functionality of the container engine is discussed herein with reference to container module 208 of Figure 2 and container engine 308 of Figure 3.
  • the processor resource assignment engine 104 represents any circuitry or combination of circuitry and executable instructions to maintain a plurality of processor resource groups based on scheduler activity information.
  • Scheduler activity information is any state information associated with the scheduler.
  • scheduler activity information can include whether a scheduler is active (e.g., whether an application or task has requested to be allocated a processor resource 110 using the policy of the scheduler), the number of processes assigned to a scheduler, and/or other information associated with the activity of the scheduler.
  • Each processor resource 110 is assignable to a processor resource group at run time to allow dynamic allocation of processor resources 110 to groups. For example, as a first processor resource group receives more processor resource requests than a second processor resource group, more processor resources 110 can be allocated to the first processor resource group than the second processor resource group.
  • the processor resource groups can represent a space partition of the plurality of processor resources 110.
  • the plurality of processor resources 110 can be divided into disjoint subsets of the plurality of processor resources 110 available on the system 100, where each processor resource group is one of the disjoint subsets.
  • the general purpose processor resources can be assigned to a first processor resource group and the special purpose processor resources can be assigned to a second processor resource group.
  • the space partition can be updated using system calls, such as kernel system calls that utilize control group settings to identify how the plurality of processor resources 110 should be isolated or otherwise limited in access to the plurality of processor resources 110.
  • Each processor resource 110 is managed by a scheduler designated to the processor resource group to which the processor resource 1 10 is assigned.
  • a general purpose processor resource of a first processor resource group can be managed by a fair-scheduler while a special purpose processor resource of a second processor resource group can be managed by a real-time scheduler.
  • the genera! purpose processor resource was reassigned to the second processor resource group then the general purpose processor would cease to be managed by the fair-share scheduler and Instead would become managed by the realtime scheduler.
  • the processor resource assignment engine 104 can reassign a processor resource from one processor resource group to another.
  • the processor resource assignment engine 104 can represent a combination of circuitry and executable instructions to reassign a first processor resource from a first processor resource group of the plurality of processor resource groups to a second processor resource group of the plurality of processor resource groups.
  • the reassignment by the processor resource assignment engine can be based on at least one of an active status of the scheduler, a change in a control group setting (i.e., a specification of a group associated with limitations on resource usage) associated with an application of the processor resource request, and a load balance strategy.
  • processor resources 110 can be reassigned to a group with higher than average utilization levels of processor resources of the group.
  • the processor resources 110 of the processor resource group designated to the real-time scheduler can be reallocated to the processor resource group designated to the fair-share scheduler.
  • the processor resource assignment engine 104 can analyze the space partition based on scheduler activity information and a control parameter of the process. For example, the processor resource assignment engine 104 can identify that the space partition lacks sufficient resources in one of the partitions (e.g., one of the processor resource groups) by gathering demand levels and utilization levels of the processor resources 110 in each partition to execute the processes being assigned to the processor resource group to achieve a quality-of-service ("QoS") threshold. The processor resource assignment engine 104 can identify whether a particular number of processor resources are available to execute processor resource requests according to a scheduler policy.
  • the processor resource assignment engine 104 can wait until another core becomes available or migrate processes to another processor by pausing execution of the processes in a first run-queue and moving the processes to a second run-queue to empty the first run-queue and allow the core associated with the first run-queue to be available for reassignment to the processor resource group to be created for the gang scheduler.
  • the process assignment engine 106 represents any circuitry or combination of circuitry and executable instructions to manage assignment of a process to a processor resource 110 of the system 100.
  • the process assignment engine 106 can represent any circuitry or combination of circuitry and executable instructions to assign a processor resource request to one of the plurality of processor resource groups, identify a first processor resource 110 of the plurality of processor resources assigned to one of the processor resource groups, and enqueue the process associated with the processor resource request on a run-queue of the first processor resource 110.
  • a kernel can execute system calls via the process assignment engine 106 to organize processes into processor resource groups and the kernel can manage the processes using the schedulers assigned to the processor resource groups.
  • the process assignment engine 106 can assign a processor resource request based on a set of process characteristics and a scheduler policy.
  • the processor resource request may be a request for an application performing content streaming and based on that characteristic is associated with a scheduler policy for content streaming applications, such as a real-time scheduler having a scheduling policy to give the application access to the processor resource 110 in real-time.
  • the process assignment engine 106 can identify a processor resource 110 based on the assignment of the processor resource 110 to a processor resource group and enqueue the process of the processor resource request on the identified processor resource 110. For example, when the processor resources 110 are space partitioned, a processor resource 110 is selected from the space allocated to the processor resource group and the process is then added to the queue of the processor resource 110 for execution of the process on the processor resource 110. Each processor resource 110 in a processor resource group executes the processes in the run-queue using the policy of the scheduler. For example, the run-queue can execute processes in the queue according to a strategy of execution defined by a policy associated with the space in which the processor resource 110 of the associated run-queue is partitioned.
  • the strategy of the scheduler policy is the management method of the queue of processes, such as fair allotment of time with the processor resource 110 for a fair-share scheduling policy or a priority-based allotment of time with the processor resource 110 for a real-time scheduler policy.
  • Figure 2 depicts the example system 200, which can comprise a memory resource 220 operatively coupled to a processor resource 210.
  • the processor resource 210 can be operatively coupled to a data store 202.
  • the memory resource 220 can contain a set of instructions that are executable by a processor resource 210. The set of instructions are operable to cause the processor resource 210 to perform operations of the system 200 when the set of instructions are executed by the processor resource 210.
  • the set of instructions stored on the memory resource 220 can be represented as a processor resource assignment module 204, a process assignment module 206, and a container module 208.
  • the processor resource assignment module 204, the process assignment module 206, and the container module 208 represent program instructions that when executed function as the processor resource assignment engine 104 of Figure 1, the process assignment engine 106 of Figure 1, and the container engine 308 of Figure 3, respectively.
  • the processor resource 210 can carry out a set of instructions to execute the modules 204, 206, and 208, and/or any other appropriate operations among and/or associated with the modules of the system 200.
  • the processor resource 210 can carry out a set of instructions to assign a processor resource group to a scheduler, maintain the processor resource group with a number of processor resources 210 based on scheduler activity information during runtime, determine a scheduler policy for a task based on a control parameter, and enqueue the task to a run-queue of one of the processor resources 210 of the processor resource group assigned to the scheduler based on the determined scheduler policy.
  • the processor resource 210 can carry out a set of instructions to analyze a behavior of the task, determine a set of process characteristics for the task based on the behavior, identify which one of a plurality of schedulers is associated with the control parameter satisfied by the set of process characteristics, analyze a space partition of a plurality of processor resources 210 based on the scheduler activity information and the control parameter of the task, create the processor resource group when a threshold level of processor resources are available and the space partition lacks a subset for the scheduler, create a run-queue for a processor resource 210 within the processor resource group, and assign the task to the created run-queue.
  • the processor resource 210 can carry out a set of instructions to analyze a demand level and utilization level of the processor resource group, determine whether the number of processor resources of the processor resource group are available to host the task based on the scheduler policy associated with the processor resource group and a QoS threshold, reassign a processor resource from a first processor resource group to a second processor resource group, and move the task from a first run-queue to a second run-queue when the processor resource is reassigned to the second processor resource group.
  • the processor resource 210 can be any appropriate circuitry capable of processing (e.g., compute) instructions, such as one or multiple processing elements capable of retrieving instructions from the memory resource 220 and executing those instructions.
  • the processor resource 210 can be a core of a processor that is able to process instructions retrieved by a memory controller of the processor.
  • the processor resource 210 can be a central processing unit ("CPU") that enables resource scheduling by fetching, decoding, and executing modules 204, 206, and 208.
  • Example processor resources 210 include at least one CPU, a semiconductor-based microprocessor, an application specific integrated circuit ("ASIC"), a field-programmable gate array ("FPGA"), and the like.
  • the processor resource 210 can include multiple processing elements that are integrated in a single device or distributed across devices.
  • the processor resource 210 can process the instructions serially, concurrently, or in partial concurrence.
  • the memory resource 220 and the data store 202 represent a medium to store data utilized and/or produced by the system 200.
  • the medium can be any non-transitory medium or combination of non-transitory mediums able to electronically store data, such as modules of the system 200 and/or data used by the system 200.
  • the medium can be a storage medium, which is distinct from a transitory transmission medium, such as a signal.
  • the medium can be machine-readable, such as computer-readable.
  • the medium can be an electronic, magnetic, optical, or other physical storage device that is capable of containing (i.e., storing) executable instructions.
  • the memory resource 220 can be said to store program instructions that when executed by the processor resource 210 cause the processor resource 210 to implement functionality of the system 200 of Figure 2.
  • the memory resource 220 can be integrated in the same device as the processor resource 210 or it can be separate but accessible to that device and the processor resource 210.
  • the memory resource 220 can be distributed across devices.
  • the memory resource 220 and the data store 202 can represent the same physical medium or separate physical mediums.
  • the data of the data store 202 can include representations of data and/or information mentioned herein.
  • the data store 202 of Figure 2 can contain information utilized by processor resources 210 executing the modules 204, 206, and 208 of Figure 2, the engines 104 and 106 of Figure 1, and the engine 308 of Figure 3.
  • the data store 202 can store a container description, a characteristic of a process, a control group setting, scheduler activity information, space partition information, etc.
  • the data store 302 of Figure 3 can be the same as data store 202 of Figure 2.
  • the system 200 can include executable instructions that are part of an installation package that when installed can be executed by the processor resource 210 to perform operations of the system 200, such as methods described with regard to Figures 4-6.
  • the memory resource 220 can be a portable medium such as a compact disc, a digital video disc, a flash drive, or memory maintained by a computer device from which the installation package can be downloaded and installed.
  • the executable instructions can be part of an application or applications already installed.
  • the memory resource 220 can be a non-volatile memory resource such as read only memory ("ROM"), a volatile memory resource such as random access memory ("RAM"), a storage device, or a combination thereof.
  • Example forms of a memory resource 220 include static RAM ("SRAM"), dynamic RAM ("DRAM"), electrically erasable programmable ROM ("EEPROM"), flash memory, or the like.
  • the memory resource 220 can include integrated memory such as a hard drive (“HD”), a solid state drive (“SSD”), or an optical drive.
  • Figure 3 depicts example environments in which various example scheduler systems can be implemented.
  • the example environment 390 is shown to include an example system capable of resource scheduling where the system includes a processing unit 330 with any number of cores 310.
  • Example environments 390 include a multi-core compute device executing an operating system, such as a LINUX kernel, to manage system resources including the processing unit 330.
  • the system (described herein with respect to Figures 1 and 2) can represent generally any circuitry or combination of circuitry and executable instructions to schedule processor resource requests in a multi-scheduler environment.
  • the system can include a processor resource assignment engine 304 (as shown in 3A) and a process assignment engine 306 (as shown in 3B) that are the same as the processor resource assignment engine 104 and the process assignment engine 106 of Figure 1 , respectively, and the associated descriptions are not repeated for brevity.
  • Figure 3B includes a container engine 308.
  • the container engine 308 represents any circuitry or combination of circuitry and executable instructions to maintain a plurality of containers 336.
  • a container 336 represents a group of processes assignable to a processor resource group.
  • a container 336 can be represented by a control group of parameters that isolate and/or shield processes in the container. In that example, scripts from a kernel can be used to manage the processes and groups of processes, such as applications 338.
  • Each container 336 can be associated with a description.
  • a plurality of containers 336 can each be described with a different characteristic so that applications 338 associated with that characteristic are placed in the associated container 336.
  • the plurality of containers 336 are assignable to the plurality of processor resource groups 332 based on the scheduler policy associated with the groups. In this manner, characteristics of the applications 338 can be used to organize assignment of processes to processor resource groups 332 and, in turn, processor resources 310 assigned to schedulers 334 are to accept processes with the characteristics assigned to the processor resource group 332 at runtime. In other words, processes can be assigned to schedulers 334 that match the processor resource requirements of the process, such as assigning a process with real-time processing requirements to a processor resource group 332 managed by a real-time scheduler 334 (a minimal placement sketch follows the end of this list).
  • the container engine 308 can reassign a first container of the plurality of containers 336 from a first processor resource group 332 to a second processor resource group 332 based on a change in a container description.
  • the container description may be updated with a new set of process characteristics of processes to be designated to the container 336 (e.g., placed within the container), and the container assignment can adapt to a different processor resource group 332 based on the change in container description.
  • the container engine 308 can include at least one of an application analysis engine 322 and an application interface engine 324.
  • the application analysis engine 322 represents circuitry or combination of circuitry and executable instructions to infer which scheduler best matches behavior of an application 338 associated with the processor resource request.
  • the application analysis engine 322 can compare the behavior of the application 338 (as described by a set of characteristics of the application 338) to the scheduler policies available by the processor resource groups 332.
  • the application interface engine 324 represents circuitry or combination of circuitry and executable instructions to enable user-supplied parameters to determine a set of control parameters associated with the plurality of containers 336.
  • a user can set a control group setting as a description of a container 336 and the control group parameters of a process can be used to determine which container 336 is to receive the process (e.g., matching control group parameters to the container description).
  • Figures 3A and 3B demonstrate that the plurality of processor resources 310 can be cores of a processing unit 330, such as a CPU.
  • the cores 310 are to be divided among processor resource groups 332 by the processor resource assignment engine 304 based on scheduler activity information associated with the plurality of schedulers 334 of the system.
  • a core 310 can be assigned to a processor resource group 332 that is statically assigned to a scheduler 334. For example, whenever a new scheduler 334 is introduced to a system a processor resource group 332 can be created to manage processor resources 310 according to the policy of the new scheduler 334.
  • applications 338 can be organized into containers 336, such as process containers.
  • the system can group processes into hierarchies or process subsets where each hierarchy or process subset is to be managed by a subsystem (e.g., managed by a scheduler 334 designated to a processor resource group 332 and restricted from access to resources outside the processor resource group 332).
  • the process assignment engine 306 can manage the containers 336 by determining which container 336 is to be assigned to which processor resource group 332.
  • the dotted line over the processing unit 330 in Figure 3B designates the boundary of the space partition of the cores 310, where, for example, the processor resource group A is restricted to access cores on the left of the dotted line and the processor resource group B is restricted to access cores on the right side of the dotted line.
  • the engines 304 and 308 can be integrated into a compute device, such as a personal computer, a server, a mobile device, or a network element.
  • the engines 304 and 308 can be integrated via circuitry or as installed instructions into a memory resource of the compute device.
  • Any appropriate combination of the system 300 and compute devices can be a virtual instance of a resource of a virtual shared pool of resources.
  • the engines and/or modules of the system herein can reside and/or execute "on the cloud" (e.g., reside and/or execute on a virtual shared pool of resources).
  • a hypervisor can be adapted to schedule resources using processor resource groups 332.
  • the engines 104 and 106 of Figure 1; the modules 204, 206, and 208 of Figure 2; and the engines 304, 306, 308, 322, and 324 are described as circuitry or a combination of circuitry and executable instructions. Such components can be implemented in a number of fashions.
  • the executable instructions can be processor-executable instructions, such as program instructions, stored on the memory resource 220, which is a tangible, non-transitory computer-readable storage medium, and the circuitry can be electronic circuitry, such as processor resource 210, for executing those instructions.
  • the instructions residing on the memory resource 220 can comprise any set of instructions to be executed directly (such as machine code) or indirectly (such as a script) by the processor resource 210.
  • the engines 104, 106, and 308 and/or the modules 204, 206, and 208 can be integrated in a single compute device or distributed across multiple compute devices.
  • the engine and/or modules can complete or assist completion of operations performed in describing another engine and/or module.
  • the processor resource assignment engine 304 of Figure 3A can request, complete, or perform the methods or operations described with the processor resource assignment engine 104 of Figure 1 as well as the process assignment engine 106 of Figure 1 and the container engine 308 of Figure 3.
  • the various engines and modules are shown as separate engines in Figures 1 and 2; in other implementations, the functionality of multiple engines and/or modules may be implemented as a single engine and/or module or divided in a variety of engines and/or modules. In some examples, the engines of the system can perform example methods described in connection with Figures 4-6.
  • FIG. 4 depicts example modules used to implement example scheduler systems.
  • the example modules of Figure 4 generally include a container module 408, a process assignment module 406, and a processor resource assignment module 404.
  • the example modules of Figure 4 can be implemented on a compute device to schedule processes on a system with a plurality of processor resources.
  • a processor resource request 458 is made to the system.
  • a processor resource executing the container module 408 receives the processor resource request 458 and identifies in which container to place the processor resource request 458 based on the application behavior 460 of the task making the request 458 and any parameters 462, such as control parameters to facilitate a decision based on control group settings.
  • the container module 408 represents program instructions that are similar to the container module 208 of Figure 2.
  • the container module 408 can include program instructions, such as the application analysis module 440 and the application interface module 442, to facilitate the container selection decision.
  • the application analysis module 440 represents program instructions that when executed by a processor resource cause the processor resource to determine whether a scheduler policy would be sufficient for the task of the processor resource request 458 based on the application behavior 460.
  • the application interface module 442 represents program instructions that when executed by a processor resource cause the processor resource to accept user-supplied parameters, such as parameters 462, and determine a set of control parameters associated with the plurality of containers based on the parameters.
  • The process assignment module 406 represents program instructions similar to the program instructions of the process assignment module 206 of Figure 2.
  • the process assignment module 406 can include program instructions, such as the scheduler analysis module 444 and the scheduler change module 446, to facilitate assignment of processes to processor resource groups based on which container the processes are associated with.
  • the scheduler analysis module 444 represents program instructions that when executed by a processor resource cause the processor resource to determine which scheduler to assign to the container based on a container description 463 and a scheduler list 464 containing a list of schedulers offered by the system.
  • a processor resource executing the scheduler analysis module 444 can identify a scheduler policy that conforms to the parameters and process characteristics of the processor resource request 458.
  • the scheduler change module 446 represents program instructions that when executed by a processor resource cause the processor resource to identify whether sufficient resources exist to assign the task to the scheduler.
  • a processor resource executing the scheduler change module 446 can identify that there is a lack of resources available to execute the selected scheduler and, in response, select a different scheduler that next-best matches the parameters and/or process characteristics of the processor resource request 458 and assign the task to that secondary scheduler.
  • resources may be available, but are not yet allocated to the processor resource group before enqueuing the task.
  • the processor resource assignment module 404 represents program instructions that are similar to the processor resource assignment module 204 of Figure 2.
  • the processor resource assignment module 404 can include program instructions, such as the core monitor module 448, the core analysis module 450, and a core change module 452, to facilitate maintenance of the plurality of processor resource groups.
  • a processor resource executing the processor resource assignment module 404 can utilize the scheduler activity information 466, a core list 470 (which represents a list of processor resources of the system), and core activity information 472 (which represents the operational statistics of the plurality of processor resources of the system).
  • the task associated with the processor resource request 458 is ushered to a run-queue of a processor resource in the processor resource group of the scheduler selected by the processor resource executing the process assignment module 406 via the processor resource run-queue assignment 474 operation.
  • the core monitor module 448 represents program instructions that when executed by a processor resource cause the processor resource to monitor the assignment of processor resources to schedulers and the set of process tasks hosted by processor resources associated with the schedulers. For example, the demand level and the utilization level of the processor resources associated with a scheduler can be observed by a processor resource executing the core monitor module 448.
  • The core analysis module 450 represents program instructions that when executed by a processor resource cause the processor resource to analyze the demand level of the processor resources associated with the applications using the scheduler. For example, the demand level of the processor resources can be compared to a QoS threshold. The demand levels of each processor resource can be aggregated to a scheduler demand level.
  • the core change module 452 represents program instructions that when executed by a processor resource cause the processor resource to maintain the space partition of the plurality of processor resources based on the scheduler demand level.
  • a processor resource executing the core change module 452 can facilitate a change in the space partition by migrating tasks from the run-queues of any processor resources designated to change to other processor resources of the same processor resource group.
  • the processor resource executing the core change module 452 can verify the run-queues of the selected processor resources are empty and change the processor resources with empty run-queues to the processor resource group of a different scheduler.
  • a load balance technique can be used by the processor resource that executes the core change module 452.
  • Figures 5 and 6 are flow diagrams depicting example methods of resource scheduling.
  • example methods of resource scheduling can generally comprise identifying a scheduler for the task, assigning a processor resource group to the scheduler, and enqueuing the task on a run-queue of a processor resource in the processor resource group.
  • a scheduler for the task is identified.
  • the scheduler is identified based on the control parameter associated with a task characteristic.
  • the task characteristic should accurately describe the behavior associated with the task and/or the application from which the task was derived so that the appropriate scheduler is identified for the tasks of the application.
  • a task can be designated to a processor resource group that is different from the application and/or another task of the application.
  • a processor resource group is assigned to the scheduler.
  • the processor resource group can be assigned based on the scheduler activity information. For example, if the scheduler became flagged to operate when a task is assigned to the scheduler, then the state of the scheduler would be changed to active and should have a processor resource group associated with the scheduler.
  • a processor resource group can be created for a scheduler when a scheduler is not associated with a processor resource group and the scheduler is assigned a task.
  • the task is enqueued on a run-queue of a processor resource in the assigned processor resource group. A task is enqueued by placing the task into a queue.
  • the run-queue is managed by the scheduler and the task receives access to the processor resource based on the strategy of the scheduler policy. For example, the task can be moved to the front of the queue when the task has a highest priority level set and the scheduler policy takes priority into consideration, whereas a fair-share policy may send the same task to the tail of the queue when the fair-share scheduler policy does not take priority into consideration.
  • Figure 6 includes blocks similar to blocks of Figure 5 and provides additional blocks and details.
  • Figure 6 depicts additional blocks and details generally regarding selecting a container for a task and maintaining a processor resource group.
  • Blocks 604 and 608 are similar to blocks 504 and 508 of Figure 5 and, for brevity, their respective descriptions are not repeated in their entirety.
  • Block 608 represents an embodiment of block 506 as represented by blocks 612, 614, and 616, where the specific descriptions of blocks 612, 614, and 616 are encompassed by the general description of block 506.
  • a container is selected for the task.
  • the container is selected based on an application characteristic associated with the task. For example, a word processing application can operate with equal priority to other applications on the computer and be placed in a container with a description associated with normal priority.
  • a content streaming application can require a certain amount of resources based on the speed of buffering and can be added to a container described with parameters for content streaming.
  • the container description can include a control parameter associated with the application characteristic, such as a "real-time processing" parameter associated with a "streaming" characteristic.
  • the scheduler is identified based on the container description associated with the container. In this manner, the container description should accurately describe the applications associated with the container so that the appropriate scheduler is identified for the tasks of the applications.
  • assignment of the processor resource group to a scheduler based on scheduler activity information can include initiating the scheduler and creating the processor resource group.
  • a scheduler flag is set. The setting of the flag can identify to the operating system that the scheduler is available to schedule tasks on a processor resource. The scheduler can assign a task to a run-queue when the scheduler flag is set.
  • a processor resource group may need to be created for the scheduler.
  • a run-queue and setting information is created.
  • the run-queue and setting information are associated with a processor resource of the processor resource group to allow for the processor resource to accept management policy operations from the scheduler.
  • the run-queue and setting information may be maintained based on the status on the scheduler flag, such as when the scheduler flag is set.
  • a processor resource group can adjust based on scheduler activity information.
  • a number of processor resources of a first processor resource group is changed based on the scheduler activity information.
  • the number of processor resources assigned to a processor resource group can vary dynamically based on the scheduler activity information, a QoS parameter, and a number of tasks assigned to a processor resource (e.g., the number of tasks in a run-queue of a processor resource).
  • a processor resource is reassigned based on scheduler activity information.
  • Processor resource allocation and container assignment may be adjusted dynamically during runtime. For example, scheduler activity information can be updated based on user input or a system event and the space partition and/or the number of containers assigned to a processor resource group should adapt to the update.
  • a plurality of processor resources are monitored and the scheduler activity information is gathered from the plurality of processor resources at block 622.
  • the scheduler activity information can be collected based on a demand level and utilization level of the processor resources associated with the processor resource group assigned to the scheduler, for example, from processor resources whose demand levels and utilization levels achieve certain demand and/or utilization minimums.
  • a space partition of the plurality of processor resource groups is changed based on the scheduler activity information gathered at block 622.
  • any queued tasks of a first run-queue of a first processor resource are migrated to a second run-queue of a second processor resource in the same processor resource group as the first processor resource. This happens because the tasks are to be executed against the originally associated scheduler, but the first processor resource is being assigned to another processor resource group.
  • the first run-queue information of the first run-queue is replaced with different run-queue information associated with a different scheduler based on the processor resource group to which the first processor resource has moved.
  • the run-queue information should be replaced when the run-queue is empty so as not to interfere with operations of the processor resource.
  • the update to the space partition can be accomplished when the processor resource designated to change processor resource groups is free from current processes, and the run-queue can receive a process once the setting information is updated with the new scheduler information associated with the processor resource group it joined. In this manner, space partitioning of the plurality of processor resources can be achieved dynamically during runtime, schedulers can be flexibly added or removed from a system, and multiple types of schedulers can manage processes concurrently on the same system.
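As referenced in the container discussion above, placing a process into a container that is bound to a processor resource group can be approximated on Linux with the cgroup-v1 cpuset interface. The C sketch below is illustrative only: the container names, the characteristic-to-container mapping, and the mount point /sys/fs/cgroup/cpuset are assumptions for the example and are not taken from the disclosure.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Map an application characteristic to a container name (hypothetical). */
    static const char *container_for(const char *characteristic)
    {
        if (strcmp(characteristic, "streaming") == 0)
            return "real_time_container";
        return "fair_share_container";
    }

    /* Attach a process to the cpuset behind the chosen container. */
    static int place_in_container(pid_t pid, const char *characteristic)
    {
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "/sys/fs/cgroup/cpuset/%s/tasks",
                 container_for(characteristic));
        f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fprintf(f, "%d\n", (int)pid);
        return fclose(f);
    }

    int main(void)
    {
        /* Place the calling process as if it were a streaming application. */
        return place_in_container(getpid(), "streaming") == 0 ? 0 : 1;
    }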

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

In one implementation, a scheduler system includes a plurality of processor resources, a processor resource assignment engine to maintain a plurality of processor resource groups based on scheduler activity information, and a process assignment engine to assign a processor resource request to one of the plurality of resource groups, identify a processor resource of the plurality of resources assigned to the one of the plurality of resource groups, and enqueue a process associated with the processor resource request on a run-queue of the processor resource based on a strategy of the scheduler policy.

Description

Scheduler-assigned Processor Resource Groups
BACKGROUND
[0001] Computers contain computational resources as a mechanism to execute applications. Computers are able to execute multiple processes based on scheduling time with resources for the processes to complete. On a single processor computer, multiple applications can appear to be running simultaneously when most processes are waiting while a single process is using the processor at any given time. The length of time to complete a process is affected by the scheduling policy managing access to each resource.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Figures 1 and 2 are block diagrams depicting example scheduler systems.
[0003] Figures 3A and 3B depict example logical blocks representing an example environment in which various scheduler systems can be implemented.
[0004] Figure 4 depicts example modules used to implement example scheduler systems.
[0005] Figures 5 and 6 are flow diagrams depicting example methods of resource scheduling.
DETAILED DESCRIPTION
[0006] In the following description and figures, some example implementations of scheduler apparatus, scheduler systems, and/or methods of resource scheduling are described. Compute devices commonly execute multiple applications (e.g., a group of associated, executable processes to perform a specific operation or set of operations). Applications can require special schedulers, such as a video streaming application that utilizes a real-time scheduler to guarantee jitter-free video provisioning. A scheduler can maintain a run-queue according to a scheduler policy. For example, a fair-share scheduler can manage a run-queue to provision time on a processor resource to allow each process (e.g., an instance of an operation to perform on the system 100) in the queue to receive the same time interval on the processor resource. For another example, a real-time scheduler can maintain a run-queue where a priority process can take over the processor resource at any time and pause other processes with less priority from utilizing the processor resource until the priority process is complete.
Personal computer systems commonly include multiple core processors and enterprise systems can include multiple central processing units ("CPUs"). Management of the entire pool of processor resources with a single scheduler can fail to meet the requirements of each application even when multiple cores are available.
[0007] Various examples described below relate to creating processor resource groups of a system supporting multiple processor resources where each processor resource group is associated with a scheduling policy provided by a scheduler and tasks are allocated to a processor resource group based on their scheduling requirements. For example, the cores of a system can be space partitioned into processor resource groups to allow a group of cores to execute one scheduling policy while a disjoint set of cores can execute another scheduling policy. In this manner, a compute apparatus can include a framework for supporting heterogeneous schedulers of an operating system to enable application execution with different scheduling requirements on the same physical system.
[0008] The terms "include," "have," and variations thereof, as used herein, mean the same as the term "comprise" or appropriate variation thereof. Furthermore, the term "based on," as used herein, means "based at least in part on." Thus, a feature that is described as based on some stimulus can be based only on the stimulus or a combination of stimuli including the stimulus. Furthermore, the term "maintain" (and variations thereof) as used herein means "to create, delete, add, remove, access, update, and/or modify."
[0009] Figures 1 and 2 are block diagrams depicting example scheduler systems 100 and 200. Referring to Figure 1, the example scheduler system 100 of Figure 1 generally includes a processor resource assignment engine 104, a process assignment engine 106, and a plurality of processor resources 110. In general, the process assignment engine 106 can assign a process to a processor resource group maintained by the processor resource assignment engine 104 where each processor resource group is a subset of the plurality of processor resources 110 and each processor resource group is managed by a scheduler. The example scheduler system 100 can include a container engine (not shown) to allow processes of the system 100 to be organized into groups that are associated with the processor resource groups. The functionality of the container engine is discussed herein with reference to container module 208 of Figure 2 and container engine 308 of Figure 3.
[0010] The processor resource assignment engine 104 represents any circuitry or combination of circuitry and executable instructions to maintain a plurality of processor resource groups based on scheduler activity information. Scheduler activity information is any state information associated with the scheduler. For example, scheduler activity information can include whether a scheduler is active (e.g., whether an application or task has requested to be allocated a processor resource 110 using the policy of the scheduler), the number of processes assigned to a scheduler, and/or other information associated with the activity of the scheduler. Each processor resource 110 is assignable to a processor resource group at run time to allow dynamic allocation of processor resources 110 to groups. For example, as a first processor resource group receives more processor resource requests than a second processor resource group, more processor resources 110 can be allocated to the first processor resource group than the second processor resource group.
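As a rough, non-authoritative illustration of that dynamic sizing, the C sketch below sizes each group in proportion to how many processes are currently requesting its scheduler; the structures, core count, and process counts are invented for the example and are not part of the disclosure.

    #include <stdio.h>

    #define TOTAL_CORES 8   /* size of the machine, assumed for the example */

    struct scheduler_activity {
        const char *scheduler;
        int active;             /* has any task requested this policy? */
        int pending_processes;  /* processes currently assigned to it  */
    };

    int main(void)
    {
        struct scheduler_activity acts[] = {
            { "fair-share", 1, 6 },
            { "real-time",  1, 2 },
        };
        int total = 0;

        for (int i = 0; i < 2; i++)
            if (acts[i].active)
                total += acts[i].pending_processes;

        /* Give each active scheduler cores roughly in proportion to its
         * demand, never fewer than one (rounding kept deliberately simple). */
        for (int i = 0; i < 2; i++) {
            int cores = 0;
            if (acts[i].active && total > 0)
                cores = (acts[i].pending_processes * TOTAL_CORES) / total;
            if (acts[i].active && cores == 0)
                cores = 1;
            printf("%-10s group -> %d core(s)\n", acts[i].scheduler, cores);
        }
        return 0;
    }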
[0011] The processor resource groups can represent a space partition of the plurality of processor resources 110. For example, the plurality of processor resources 110 can be divided into disjoint subsets of the plurality of processor resources 110 available on the system 100, where each processor resource group is one of the disjoint subsets. For another example, on a system 100 that includes general purpose processor resources and special purpose processor resources, the general purpose processor resources can be assigned to a first processor resource group and the special purpose processor resources can be assigned to a second processor resource group. The space partition can be updated using system calls, such as kernel system calls that utilize control group settings to identify how the plurality of processor resources 110 should be isolated or otherwise limited in access to the plurality of processor resources 110.
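One way to picture such a control-group-based space partition on a stock Linux system is the cgroup-v1 cpuset interface. The C sketch below is a minimal illustration of the idea only; the mount point /sys/fs/cgroup/cpuset, the group names, and the core ranges are assumptions for the example rather than details from the disclosure.

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    static int write_file(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");
        if (!f) { perror(path); return -1; }
        fputs(value, f);
        return fclose(f);
    }

    /* Create one cpuset-backed processor resource group. */
    static int make_group(const char *name, const char *cpus)
    {
        char path[256];

        snprintf(path, sizeof(path), "/sys/fs/cgroup/cpuset/%s", name);
        if (mkdir(path, 0755) != 0 && errno != EEXIST) { perror(path); return -1; }

        snprintf(path, sizeof(path), "/sys/fs/cgroup/cpuset/%s/cpuset.cpus", name);
        if (write_file(path, cpus) != 0) return -1;

        /* cpusets also require a memory node assignment. */
        snprintf(path, sizeof(path), "/sys/fs/cgroup/cpuset/%s/cpuset.mems", name);
        return write_file(path, "0");
    }

    int main(void)
    {
        /* Disjoint space partition: cores 0-3 for one group, 4-7 for another. */
        if (make_group("fair_share_group", "0-3") != 0) return EXIT_FAILURE;
        if (make_group("real_time_group",  "4-7") != 0) return EXIT_FAILURE;
        puts("processor resource groups created");
        return EXIT_SUCCESS;
    }

Processes later attached to one of these cpusets are confined to that group's cores, which mirrors the isolation the paragraph describes.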
[0012] Each processor resource 110 is managed by a scheduler designated to the processor resource group to which the processor resource 110 is assigned. For example, a general purpose processor resource of a first processor resource group can be managed by a fair-share scheduler while a special purpose processor resource of a second processor resource group can be managed by a real-time scheduler. In that example, if the general purpose processor resource was reassigned to the second processor resource group then the general purpose processor would cease to be managed by the fair-share scheduler and instead would become managed by the real-time scheduler.
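From the point of view of a single process, the closest standard Linux approximation of this idea is to pin the process to a group's cores and select a scheduling class for it. The sketch below uses sched_setaffinity and sched_setscheduler for illustration only; stock Linux does not literally attach one scheduler class per core, and the core range and priority are assumptions.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Hypothetical layout: cores 4-7 form the real-time group. */
    static int join_real_time_group(void)
    {
        cpu_set_t cores;
        struct sched_param param = { .sched_priority = 10 };

        CPU_ZERO(&cores);
        for (int cpu = 4; cpu <= 7; cpu++)
            CPU_SET(cpu, &cores);

        /* Restrict the calling process to the group's cores... */
        if (sched_setaffinity(0, sizeof(cores), &cores) != 0) {
            perror("sched_setaffinity");
            return -1;
        }

        /* ...and apply the policy designated to that group
         * (SCHED_FIFO requires CAP_SYS_NICE / root). */
        if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
            perror("sched_setscheduler");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        return join_real_time_group() == 0 ? 0 : 1;
    }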
[0013] The processor resource assignment engine 104 can reassign a processor resource from one processor resource group to another. For example, the processor resource assignment engine 104 can represent a combination of circuitry and executable instructions to reassign a first processor resource from a first processor resource group of the plurality of processor resource groups to a second processor resource group of the plurality of processor resource groups. The reassignment by the processor resource assignment engine can be based on at least one of an active status of the scheduler, a change in a control group setting (i.e., a specification of a group associated with limitations on resource usage) associated with an application of the processor resource request, and a load balance strategy. For example, processor resources 110 can be reassigned to a group with higher than average utilization levels of processor resources of the group. For another example, if all applications requesting real-time scheduling complete, the processor resources 110 of the processor resource group designated to the real-time scheduler can be reallocated to the processor resource group designated to the fair-share scheduler.
[0014] The processor resource assignment engine 104 can analyze the space partition based on scheduler activity information and a control parameter of the process. For example, the processor resource assignment engine 104 can identify that the space partition lacks sufficient resources in one of the partitions (e.g., one of the processor resource groups) by gathering demand levels and utilization levels of the processor resources 110 in each partition to execute the processes being assigned to the processor resource group to achieve a quality-of-service ("QoS") threshold. The processor resource assignment engine 104 can identify whether a particular number of processor resources are available to execute processor resource requests according to a scheduler policy. For example, if a gang scheduler that uses four cores is requested, but only three cores are available, the processor resource assignment engine 104 can wait until another core becomes available or migrate processes to another processor by pausing execution of the processes in a first run-queue and moving the processes to a second run-queue to empty the first run-queue and allow the core associated with the first run-queue to be available for reassignment to the processor resource group to be created for the gang scheduler.
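The run-queue draining step in the gang-scheduler example above can be pictured with a few lines of C. The structures below are hypothetical user-space stand-ins for the idea, not kernel internals or the patented mechanism.

    #include <stdio.h>

    struct task      { const char *name; struct task *next; };
    struct run_queue { struct task *head; };

    /* Append the whole contents of `from` onto `to`, leaving `from` empty. */
    static void migrate_all(struct run_queue *from, struct run_queue *to)
    {
        struct task **tail = &to->head;
        while (*tail)
            tail = &(*tail)->next;
        *tail = from->head;
        from->head = NULL;   /* the core behind `from` is now reassignable */
    }

    int main(void)
    {
        struct task b = { "batch_job", NULL };
        struct task a = { "compile",   &b };
        struct run_queue donor   = { &a };    /* core leaving the group */
        struct run_queue sibling = { NULL };  /* core staying behind    */

        migrate_all(&donor, &sibling);
        printf("donor queue empty: %s\n", donor.head ? "no" : "yes");
        for (struct task *t = sibling.head; t; t = t->next)
            printf("sibling queue: %s\n", t->name);
        return 0;
    }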
[0015] The process assignment engine 106 represents any circuitry or combination of circuitry and executable instructions to manage assignment of a process to a processor resource 110 of the system 100. For example, the process assignment engine 106 can represent any circuitry or combination of circuitry and executable instructions to assign a processor resource request to one of the plurality of processor resource groups, identify a first processor resource 110 of the plurality of processor resources assigned to one of the processor resource groups, and enqueue the process associated with the processor resource request on a run-queue of the first processor resource 110. For another example, a kernel can execute system calls via the process assignment engine 106 to organize processes into processor resource groups and the kernel can manage the processes using the schedulers assigned to the processor resource groups.
[0016] The process assignment engine 106 can assign a processor resource request based on a set of process characteristics and a scheduler policy. For example, the processor resource request may be a request for an application performing content streaming and, based on that characteristic, is associated with a scheduler policy for content streaming applications, such as a real-time scheduler having a scheduling policy to give the application access to the processor resource 110 in real-time.
[0017] The process assignment engine 106 can identify a processor resource 110 based on the assignment of the processor resource 110 to a processor resource group and enqueue the process of the processor resource request on the identified processor resource 110. For example, when the processor resources 110 are space partitioned, a processor resource 110 is selected from the space allocated to the processor resource group and the process is then added to the queue of the processor resource 110 for execution of the process on the processor resource 110. Each processor resource 110 in a processor resource group executes the processes in the run-queue using the policy of the scheduler. For example, the run-queue can execute processes in the queue according to a strategy of execution defined by a policy associated with the space in which the processor resource 110 of the associated run-queue is partitioned. The strategy of the scheduler policy is the management method of the queue of processes, such as fair allotment of time with the processor resource 110 for a fair-share scheduling policy or a priority-based allotment of time with the processor resource 110 for a real-time scheduler policy.
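The two queue strategies named above can be sketched in C as follows; the list-based queues, task names, and priorities are illustrative assumptions rather than the patented implementation.

    #include <stdio.h>

    struct task      { const char *name; int priority; struct task *next; };
    struct run_queue { struct task *head; };

    /* Fair-share strategy: every task joins at the tail and waits its turn. */
    static void enqueue_fair_share(struct run_queue *rq, struct task *t)
    {
        struct task **p = &rq->head;
        while (*p)
            p = &(*p)->next;
        t->next = NULL;
        *p = t;
    }

    /* Real-time strategy: insert by priority so urgent tasks overtake others. */
    static void enqueue_real_time(struct run_queue *rq, struct task *t)
    {
        struct task **p = &rq->head;
        while (*p && (*p)->priority >= t->priority)
            p = &(*p)->next;
        t->next = *p;
        *p = t;
    }

    int main(void)
    {
        struct run_queue fair = { NULL }, rt = { NULL };
        struct task edit = { "editor",    0, NULL }, scan = { "virus_scan", 0, NULL };
        struct task enc  = { "transcode", 3, NULL }, live = { "live_feed",  9, NULL };

        enqueue_fair_share(&fair, &edit);
        enqueue_fair_share(&fair, &scan);   /* editor, then virus_scan       */
        enqueue_real_time(&rt, &enc);
        enqueue_real_time(&rt, &live);      /* live_feed overtakes transcode */

        for (struct task *t = fair.head; t; t = t->next) printf("fair: %s\n", t->name);
        for (struct task *t = rt.head;   t; t = t->next) printf("rt:   %s\n", t->name);
        return 0;
    }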
[0018] Figure 2 depicts the example system 200, which can comprise a memory resource 220 operatively coupled to a processor resource 210. The processor resource 210 can be operatively coupled to a data store 202.
[0019] Referring to Figure 2, the memory resource 220 can contain a set of instructions that are executable by a processor resource 210. The set of instructions are operable to cause the processor resource 210 to perform operations of the system 200 when the set of instructions are executed by the processor resource 210. The set of instructions stored on the memory resource 220 can be represented as a processor resource assignment module 204, a process assignment module 206, and a container module 208.
[0020] The processor resource assignment module 204, the process assignment module 206, and the container module 208 represent program instructions that when executed function as the processor resource assignment engine 104 of Figure 1, the process assignment engine 106 of Figure 1, and the container engine 308 of Figure 3, respectively. The processor resource 210 can carry out a set of instructions to execute the modules 204, 206, and 208, and/or any other appropriate operations among and/or associated with the modules of the system 200. For example, the processor resource 210 can carry out a set of instructions to assign a processor resource group to a scheduler, maintain the processor resource group with a number of processor resources 210 based on scheduler activity information during runtime, determine a scheduler policy for a task based on a control parameter, and enqueue the task to a run-queue of one of the processor resources 210 of the processor resource group assigned to the scheduler based on the determined scheduler policy. For another example, the processor resource 210 can carry out a set of instructions to analyze a behavior of the task, determine a set of process characteristics for the task based on the behavior, identify which one of a plurality of schedulers is associated with the control parameter satisfied by the set of process characteristics, analyze a space partition of a plurality of processor resources 210 based on the scheduler activity information and the control parameter of the task, create the processor resource group when a threshold level of processor resources are available and the space partition lacks a subset for the scheduler, create a run-queue for a processor resource 210 within the processor resource group, and assign the task to the created run-queue. For yet another example, the processor resource 210 can carry out a set of instructions to analyze a demand level and utilization level of the processor resource group, determine whether the number of processor resources of the processor resource group are available to host the task based on the scheduler policy associated with the processor resource group and a QoS threshold, reassign a processor resource from a first processor resource group to a second processor resource group, and move the task from a first run-queue to a second run-queue when the processor resource is reassigned to the second processor resource group.
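Putting the steps of this paragraph together, a compact, non-authoritative C sketch of the flow (classify the task from a control parameter, pick the scheduler's processor resource group, enqueue on the least-loaded run-queue) might look like the following; every type, name, and number in it is hypothetical.

    #include <stdio.h>
    #include <stddef.h>

    enum policy { FAIR_SHARE = 0, REAL_TIME = 1 };

    struct task  { const char *name; int needs_real_time; };
    struct group {
        enum policy policy;
        size_t      ncores;
        size_t      queue_len[8];   /* per-core run-queue depth */
    };

    /* "Determine a scheduler policy for a task based on a control parameter." */
    static enum policy classify(const struct task *t)
    {
        return t->needs_real_time ? REAL_TIME : FAIR_SHARE;
    }

    /* "Enqueue the task to a run-queue of one of the processor resources of
     *  the processor resource group assigned to the scheduler."             */
    static void enqueue(struct group *g, const struct task *t)
    {
        size_t best = 0;
        for (size_t i = 1; i < g->ncores; i++)
            if (g->queue_len[i] < g->queue_len[best])
                best = i;
        g->queue_len[best]++;
        printf("%s -> core %zu of the %s group\n", t->name, best,
               g->policy == REAL_TIME ? "real-time" : "fair-share");
    }

    int main(void)
    {
        /* Index matches the enum: groups[FAIR_SHARE], groups[REAL_TIME]. */
        struct group groups[2] = {
            { FAIR_SHARE, 4, { 2, 1, 3, 1 } },
            { REAL_TIME,  4, { 0, 1, 0, 0 } },
        };
        struct task stream = { "video_stream", 1 };
        struct task edit   = { "text_editor",  0 };

        enqueue(&groups[classify(&stream)], &stream);
        enqueue(&groups[classify(&edit)],   &edit);
        return 0;
    }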
[0021] Although these particular modules and various other modules are illustrated and discussed in relation to Figure 2 and other example implementations, other combinations or sub-combinations of modules can be included within other implementations. Said differently, although the modules illustrated in Figure 2 and discussed in other example implementations perform specific functionalities in the examples discussed herein, these and other functionalities can be accomplished, implemented, or realized at different modules or at combinations of modules. For example, two or more modules illustrated and/or discussed as separate can be combined into a module that performs the functionalities discussed in relation to the two modules. As another example, functionalities performed at one module as discussed in relation to these examples can be performed at a different module or different modules. Figure 4 depicts yet another example of how functionality can be organized into modules.
[0022] The processor resource 210 can be any appropriate circuitry capable of processing (e.g., computing) instructions, such as one or multiple processing elements capable of retrieving instructions from the memory resource 220 and executing those instructions. For example, the processor resource 210 can be a core of a processor that is able to process instructions retrieved by a memory controller of the processor. For another example, the processor resource 210 can be a central processing unit ("CPU") that enables resource scheduling by fetching, decoding, and executing modules 204, 206, and 208. Example processor resources 210 include at least one CPU, a semiconductor-based microprocessor, an application specific integrated circuit ("ASIC"), a field-programmable gate array ("FPGA"), and the like. The processor resource 210 can include multiple processing elements that are integrated in a single device or distributed across devices. The processor resource 210 can process the instructions serially, concurrently, or in partial concurrence.
[0023] The memory resource 220 and the data store 202 represent a medium to store data utilized and/or produced by the system 200. The medium can be any non-transitory medium or combination of non-transitory mediums able to electronically store data, such as modules of the system 200 and/or data used by the system 200. For example, the medium can be a storage medium, which is distinct from a transitory transmission medium, such as a signal. The medium can be machine-readable, such as computer-readable. The medium can be an electronic, magnetic, optical, or other physical storage device that is capable of containing (i.e., storing) executable
instructions. The memory resource 220 can be said to store program instructions that when executed by the processor resource 210 cause the processor resource 210 to implement functionality of the system 200 of Figure 2. The memory resource 220 can be integrated in the same device as the processor resource 210 or it can be separate but accessible to that device and the processor resource 210. The memory resource 220 can be distributed across devices. The memory resource 220 and the data store 202 can represent the same physical medium or separate physical mediums. The data of the data store 202 can include representations of data and/or information mentioned herein.
[0024] The data store 202 of Figure 2 can contain information utilized by processor resources 210 executing the modules 204, 206, and 208 of Figure 2, the engines 104 and 106 of Figure 1, and the engine 308 of Figure 3. For example, the data store 202 can store a container description, a characteristic of a process, a control group setting, scheduler activity information, space partition information, etc. The data store 302 of Figure 3 can be the same as the data store 202 of Figure 2.
[0025] In some examples, the executable instructions can be part of an installation package that when installed can be executed by the processor resource 210 to perform operations of the system 200, such as methods described with regards to Figures 4-6. In that example, the memory resource 220 can be a portable medium such as a compact disc, a digital video disc, a flash drive, or memory maintained by a computer device from which the installation package can be downloaded and installed. In another example, the executable instructions can be part of an application or applications already installed. The memory resource 220 can be a non-volatile memory resource such as read only memory ("ROM"), a volatile memory resource such as random access memory ("RAM"), a storage device, or a combination thereof. Example forms of a memory resource 220 include static RAM ("SRAM"), dynamic RAM ("DRAM"), electrically erasable programmable ROM ("EEPROM"), flash memory, or the like. The memory resource 220 can include integrated memory such as a hard drive ("HD"), a solid state drive ("SSD"), or an optical drive.
[0026] Figure 3 depicts example environments in which various example scheduler systems can be implemented. The example environment 390 is shown to include an example system capable of resource scheduling where the system includes a processing unit 330 with any number of cores 310. Example environments 390 include a multi-core compute device executing an operating system, such as a LINUX kernel, to manage system resources including the processing unit 330. The system (described herein with respect to Figures 1 and 2) can represent generally any circuitry or combination of circuitry and executable instructions to schedule processor resource requests in a multi-scheduler environment. The system can include a processor resource assignment engine 304 (as shown in Figure 3A) and a process assignment engine 306 (as shown in Figure 3B) that are the same as the processor resource assignment engine 104 and the process assignment engine 106 of Figure 1, respectively, and the associated descriptions are not repeated for brevity.
[0027] Figure 3B includes a container engine 308. The container engine 308 represents any circuitry or combination of circuitry and executable instructions to maintain a plurality of containers 336. A container 336 represents a group of processes assignable to a processor resource group. For example, a container 336 can be represented by a control group of parameters that isolate and/or shield processes in the container. In that example, scripts from a kernel can be used to manage the processes and groups of processes, such as applications 338. Each container 336 can be associated with a description. For example, a plurality of containers 336 can each be described with a different characteristic so that applications 338 associated with that characteristic are placed in the associated container 336. The plurality of containers 336 are assignable to the plurality of processor resource groups 332 based on the scheduler policy associated with the groups. In this manner, characteristics of the applications 338 can be used to organize assignment of processes to processor resource groups 332 and, in turn, processor resources 310 assigned to schedulers 334 are to accept processes with the characteristics assigned to the processor resource group 332 at runtime. In other words, processes can be assigned to schedulers 334 that match the processor resource requirements of the process, such as assigning a process with real-time processing requirements to a processor resource group 332 managed by a real-time scheduler 334.
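For illustration, a minimal Python sketch of this container placement follows: containers are keyed by a descriptive characteristic and mapped to processor resource groups by scheduler policy, so that an application carrying a characteristic is placed in the matching container. The characteristic strings, group names, and fallback behavior are assumptions for the example.

    containers = {
        "streaming":  {"processes": [], "group": "real-time group"},
        "background": {"processes": [], "group": "fair-share group"},
    }

    def place_application(app_name, characteristic):
        # Applications with a recognized characteristic land in the container described
        # by it; anything else falls back to a general-purpose container.
        container = containers.get(characteristic, containers["background"])
        container["processes"].append(app_name)
        return container["group"]

    print(place_application("video-player", "streaming"))   # -> real-time group
    print(place_application("indexer", "batch"))             # -> fair-share group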
[0028] The container engine 308 can reassign a first container of the plurality of containers 336 from a first processor resource group 332 to a second processor resource group 332 based on a change in a container description. For example, the container description may be updated with a new set of process characteristics of processes to be designated to the container 336 (e.g., placed within the container), and the container assignment can adapt to a different processor resource group 332 based on the change in container description.
[0029] The container engine 308 can include at least one of an application analysis engine 322 and an application interface engine 324. The application analysis engine 322 represents circuitry or a combination of circuitry and executable instructions to infer which scheduler best matches the behavior of an application 338 associated with the processor resource request. For example, the application analysis engine 322 can compare the behavior of the application 338 (as described by a set of characteristics of the application 338) to the scheduler policies available by the processor resource groups 332. The application interface engine 324 represents circuitry or a combination of circuitry and executable instructions to enable user-supplied parameters to determine a set of control parameters associated with the plurality of containers 336. For example, a user can set a control group setting as a description of a container 336, and the control group parameters of a process can be used to determine which container 336 is to receive the process (e.g., matching control group parameters to the container description).
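For illustration, a minimal Python sketch of such an inference step follows: application behavior, summarized as a set of characteristic strings, is compared against the traits each scheduler policy serves, with a user-supplied parameter taking precedence. The trait sets and the overlap-count scoring are assumptions; the description does not prescribe a particular matching algorithm.

    SCHEDULER_TRAITS = {
        "real-time":  {"latency-sensitive", "periodic-deadline", "streaming"},
        "fair-share": {"throughput-oriented", "batch", "interactive"},
    }

    def infer_scheduler(behavior, user_override=None):
        if user_override in SCHEDULER_TRAITS:   # a user-supplied control parameter wins
            return user_override
        # Score each scheduler by how many behavioral traits its policy covers.
        scores = {name: len(traits & behavior) for name, traits in SCHEDULER_TRAITS.items()}
        return max(scores, key=scores.get)

    print(infer_scheduler({"streaming", "latency-sensitive"}))    # -> real-time
    print(infer_scheduler({"batch"}, user_override="real-time"))  # -> real-time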
[0030] Figures 3A and 3B demonstrate that the plurality of processor resources 310 can be cores of a processing unit 330, such as a CPU. The cores 310 are to be divided among processor resource groups 332 by the processor resource assignment engine 304 based on scheduler activity information associated with the plurality of schedulers 334 of the system. Referring to Figure 3A, a core 310 can be assigned to a processor resource group 332 that is statically assigned to a scheduler 334. For example, whenever a new scheduler 334 is introduced to a system, a processor resource group 332 can be created to manage processor resources 310 according to the policy of the new scheduler 334. Referring to Figure 3B, applications 338 can be organized into containers 336, such as process containers. For example, the system can group processes into hierarchies or process subsets where each hierarchy or process subset is to be managed by a subsystem (e.g., managed by a scheduler 334 designated to a processor resource group 332 and restricted from access to resources outside the processor resource group 332). The process assignment engine 306 can manage the containers 336 by determining which container 336 is to be assigned to which processor resource group 332. The dotted line over the processing unit 330 in Figure 3B designates the boundary of the space partition of the cores 310, where, for example, the processor resource group A is restricted to access cores on the left of the dotted line and the processor resource group B is restricted to access cores on the right side of the dotted line.
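For illustration, a minimal Python sketch of such a space partition follows: the cores are modeled as disjoint sets of core identifiers, one per processor resource group, with a simple check that a group only reaches cores inside its own partition. The eight-core split and the group names are assumptions for the example.

    cores = list(range(8))
    partition = {
        "group_A_real_time":  set(cores[:4]),   # e.g., cores 0-3 on one side of the boundary
        "group_B_fair_share": set(cores[4:]),   # e.g., cores 4-7 on the other side
    }

    def core_allowed(group, core_id):
        # A group is restricted from accessing cores outside its space allocation.
        return core_id in partition[group]

    assert not (partition["group_A_real_time"] & partition["group_B_fair_share"])  # disjoint sets
    print(core_allowed("group_A_real_time", 6))   # False: core 6 lies across the boundary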
[0031] The engines 304, 306, and 308 can be integrated into a compute device, such as a personal computer, a server, a mobile device, or a network element. The engines can be integrated via circuitry or as installed instructions into a memory resource of the compute device. Any appropriate combination of the system 300 and compute devices can be a virtual instance of a resource of a virtual shared pool of resources. The engines and/or modules of the system herein can reside and/or execute "on the cloud" (e.g., reside and/or execute on a virtual shared pool of resources). For example, a hypervisor can be adapted to schedule resources using processor resource groups 332.
[0032] In the discussion herein, the engines 104 and 106 of Figure 1; the modules 204, 206, and 208 of Figure 2; and the engines 304, 306, 308, 322, and 324 are described as circuitry or a combination of circuitry and executable instructions. Such components can be implemented in a number of fashions. Looking at Figure 2, the executable instructions can be processor-executable instructions, such as program instructions, stored on the memory resource 220, which is a tangible, non-transitory computer-readable storage medium, and the circuitry can be electronic circuitry, such as processor resource 210, for executing those instructions. The instructions residing on the memory resource 220 can comprise any set of instructions to be executed directly (such as machine code) or indirectly (such as a script) by the processor resource 210.
[0033] Referring to Figures 1-3, the engines 104, 106, and 308 and/or the modules 204, 206, and 208 can be integrated in a single compute device or distributed across multiple compute devices. The engines and/or modules can complete or assist completion of operations performed in describing another engine and/or module. For example, the processor resource assignment engine 304 of Figure 3A can request, complete, or perform the methods or operations described with the processor resource assignment engine 104 of Figure 1 as well as the process assignment engine 106 of Figure 1 and the container engine 308 of Figure 3. Thus, although the various engines and modules are shown as separate engines in Figures 1 and 2, in other implementations, the functionality of multiple engines and/or modules may be implemented as a single engine and/or module or divided in a variety of engines and/or modules. In some examples, the engines of the system can perform example methods described in connection with Figures 4-6.
[0034] Figure 4 depicts example modules used to implement example scheduler systems. Referring to Figure 4, the example modules of Figure 4 generally include a container module 408, a process assignment module 406, and a processor resource assignment module 404. The example modules of Figure 4 can be implemented on a compute device to schedule processes on a system with a plurality of processor resources.
[0035] A processor resource request 458 is made to the system. A processor resource executing the container module 408 receives the processor resource request 458 and identifies in which container to place the processor resource request 458 based on the application behavior 460 of the task making the request 458 and any parameters 482, such as control parameters to facilitate a decision based on control group settings.
[0036] The container module 408 represents program instructions that are similar to the container module 208 of Figure 2. The container module 408 can include program instructions, such as the application analysis module 440 and the application interface module 442, to facilitate the container selection decision. The application analysis module 440 represents program instructions that when executed by a processor resource cause the processor resource to determine whether a scheduler policy would be sufficient for the task of the processor resource request 458 based on the application behavior 460. The application interface module 442 represents program instructions that when executed by a processor resource cause the processor resource to accept user-supplied parameters, such as parameters 482, and determine a set of control parameters associated with the plurality of containers based on the parameters.
[0037] The process assignment module 406 represents program instructions similar to the program instructions of the process assignment module 206 of Figure 2. The process assignment module 406 can include program instructions, such as the scheduler analysis module 444 and the scheduler change module 446, to facilitate assignment of processes to processor resource groups based on which container the processes are associated with. The scheduler analysis module 444 represents program instructions that when executed by a processor resource cause the processor resource to determine which scheduler to assign to the container based on a container description 463 and a scheduler list 464 containing a list of schedulers offered by the system. For example, a processor resource executing the scheduler analysis module 444 can identify a scheduler policy that conforms to the parameters and process characteristics of the processor resource request 458. The scheduler change module 446 represents program instructions that when executed by a processor resource cause the processor resource to identify whether sufficient resources exist to assign the task to the scheduler. For example, a processor resource executing the scheduler change module 446 can identify that there is a lack of resources available to execute the selected scheduler and, in response, select a different scheduler that next-best matches the parameters and/or process characteristics of the processor resource request 458 and assign the task to that secondary scheduler. For another example, resources may be available, but not yet allocated to the processor resource group before enqueuing the task.
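For illustration, a minimal Python sketch of this fallback follows: the preferred scheduler is used when its processor resource group has capacity; otherwise the next-best match is selected. The ranking of candidate schedulers and the capacity test are assumptions supplied by the caller, not details from the description.

    def pick_scheduler(ranked_schedulers, has_capacity):
        # ranked_schedulers: best match first; has_capacity: scheduler name -> bool.
        for name in ranked_schedulers:
            if has_capacity(name):
                return name              # sufficient resources: use the preferred match
        return ranked_schedulers[-1]     # otherwise settle for the last candidate

    capacity = {"real-time": False, "fair-share": True}
    print(pick_scheduler(["real-time", "fair-share"], lambda s: capacity[s]))  # -> fair-share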
[0038] The processor resource assignment module 404 represents program instructions that are similar to the processor resource assignment module 204 of Figure 2. The processor resource assignment module 404 can include program instructions, such as the core monitor module 448, the core analysis module 450, and a core change module 452, to facilitate maintenance of the plurality of processor resource groups. A processor resource executing the processor resource assignment module 404 can utilize the scheduler activity information 466, a core list 470 (which represents a list of processor resources of the system), and core activity information 472 (which represents the operational statistics of the plurality of processor resources of the system). The task associated with the processor resource request 458 is ushered to a run-queue of a processor resource in the processor resource group of the scheduler selected by the processor resource executing the process assignment module 406 via the processor resource run-queue assignment 474 operation.
[0039] The core monitor module 448 represents program instructions that when executed by a processor resource cause the processor resource to monitor the assignment of processor resources to schedulers and the set of process tasks hosted by processor resources associated with the schedulers. For example, the demand level and the utilization level of the processor resources associated with a scheduler can be observed by a processor resource executing the core monitor module 448.
[0040] The core analysis module 450 represents program instructions that when executed by a processor resource cause the processor resource to analyze the demand level of the processor resources associated with the applications using the scheduler. For example, the demand level of the processor resources can be compared to a QoS threshold. The demand levels of the processor resources can be aggregated into a scheduler demand level.
[0041] The core change module 452 represents program instructions that when executed by a processor resource cause the processor resource to maintain the space partition of the plurality of processor resources based on the scheduler demand level. A processor resource executing the core change module 452 can facilitate a change in the space partition by migrating tasks from the run-queues of any processor resources designated to change to other processor resources of the same processor resource group. The processor resource executing the core change module 452 can verify the run-queues of the selected processor resources are empty and change the processor resources with empty run-queues to the processor resource group of a different scheduler. A load balance technique can be used by the processor resource that executes the core change module 452.
[0042] Figures 5 and 6 are flow diagrams depicting example methods of resource scheduling. Referring to Figure 5, example methods of resource scheduling can generally comprise identifying a scheduler for the task, assigning a processor resource group to the scheduler, and enqueuing the task on a run-queue of a processor resource in the processor resource group.
[0043] At block 504, a scheduler for the task is identified. The scheduler is identified based on the control parameter associated with a task characteristic. In this manner, the task characteristic should accurately describe the behavior associated with the task and/or the application from which the task was derived so that the appropriate scheduler is identified for the tasks of the application. However, a task can be designated to a processor resource group that is different from that of the application and/or another task of the application.
[0044] At block 506, a processor resource group is assigned to the scheduler. The processor resource group can be assigned based on the scheduler activity information. For example, if the scheduler became flagged to operate when a task is assigned to the scheduler, then the state of the scheduler would be changed to active and a processor resource group should be associated with the scheduler. A processor resource group can be created for a scheduler when a scheduler is not associated with a processor resource group and the scheduler is assigned a task.
[0045] At block 508, the task is enqueued on a run-queue of a processor resource in the assigned processor resource group. A task is enqueued by placing the task into a queue. The run-queue is managed by the scheduler and the task receives access to the processor resource based on the strategy of the scheduler policy. For example, the task can be moved to the front of the queue when the task has a highest priority level set and the scheduler policy takes priority into consideration, whereas a fair-share policy may send the same task to the tail of the queue when the fair-share scheduler policy does not take priority into consideration.
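For illustration, a minimal Python sketch of this difference follows: a priority-aware policy moves a highest-priority task to the front of the queue, while a fair-share style policy appends every task to the tail. The helper functions are assumptions for the example, not an API taken from the description.

    from collections import deque

    def enqueue_priority(run_queue, task):
        highest = max((t.get("priority", 0) for t in run_queue), default=0)
        if task.get("priority", 0) >= highest:
            run_queue.appendleft(task)   # highest priority jumps to the front of the queue
        else:
            run_queue.append(task)

    def enqueue_fair_share(run_queue, task):
        run_queue.append(task)           # priority not considered; the task joins the tail

    rq = deque([{"name": "t1", "priority": 10}])
    enqueue_priority(rq, {"name": "urgent", "priority": 99})
    print([t["name"] for t in rq])       # ['urgent', 't1']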
[0046] Figure 6 includes blocks similar to blocks of Figure 5 and provides additional blocks and details. In particular, Figure 6 depicts additional blocks and details generally regarding selecting a container for a task and maintaining a processor resource group. Blocks 604 and 608 are similar to blocks 504 and 508 of Figure 5 and, for brevity, their respective descriptions are not repeated in their entirety. Block 606 represents an embodiment of block 506 as represented by blocks 612, 614, and 616, where the specific descriptions of blocks 612, 614, and 616 are encompassed by the general description of block 506.
[0047] At block 602, a container is selected for the task. The container is selected based on an application characteristic associated with the task. For example, a word processing application can operate with equal priority to other applications on the computer and be placed in a container with a description associated with normal resource usage. For another example, a content streaming application can require a certain amount of resources based on the speed of buffering and can be added to a container described with parameters for content streaming. The container description can include a control parameter associated with the application characteristic, such as a "real-time processing" parameter associated with a "streaming" characteristic. The scheduler is identified based on the container description associated with the container. In this manner, the container description should accurately describe the applications associated with the container so that the appropriate scheduler is identified for the tasks of the applications.
[0048] At block 606, assignment of the processor resource group to a scheduler based on scheduler activity information can include initiating the scheduler and creating the processor resource group. At block 612, a scheduler flag is set. The setting of the flag can identify to the operating system that the scheduler is available to schedule tasks on a processor resource. The scheduler can assign a task to a run-queue when the scheduler flag is set.
[0049] A processor resource group may need to be created for the scheduler. At block 614, a run-queue and setting information are created. The run-queue and setting information are associated with a processor resource of the processor resource group to allow for the processor resource to accept management policy operations from the scheduler. The run-queue and setting information may be maintained based on the status of the scheduler flag, such as when the scheduler flag is set.
[0050] A processor resource group can adjust based on scheduler activity information. At block 616, a number of processor resources of a first processor resource group is changed based on the scheduler activity information. The number of processor resources assigned to a processor resource group can vary dynamically based on the scheduler activity information, a QoS parameter, and a number of tasks assigned to a processor resource (e.g., the number of tasks in a run-queue of a processor resource).
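For illustration, a minimal Python sketch of such a resizing decision follows, assuming the QoS parameter is expressed as a target number of queued tasks per processor resource. The sizing formula and bounds are assumptions; the description does not fix a particular rule.

    import math

    def cores_needed(queue_lengths, qos_tasks_per_core, min_cores=1, max_cores=8):
        total_queued = sum(queue_lengths)                      # from scheduler activity information
        wanted = math.ceil(total_queued / qos_tasks_per_core)  # enough cores to meet the QoS target
        return max(min_cores, min(max_cores, wanted))

    # Three processor resources currently hold 12 queued tasks; with a target of
    # 2 tasks per core, the group would grow to 6 processor resources.
    print(cores_needed([5, 4, 3], qos_tasks_per_core=2))   # -> 6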
[0051] At block 610, a processor resource is reassigned based on scheduler activity information. Processor resource allocation and container assignment may be adjusted dynamically during runtime. For example, scheduler activity information can be updated based on user input or a system event, and the space partition and/or the number of containers assigned to a processor resource group should adapt to the update.
[0052] At block 620, a plurality of processor resources are monitored, and the scheduler activity information is gathered from the plurality of processor resources at block 622. The scheduler activity information can be collected based on a demand level and utilization level of the processor resources associated with the processor resource group assigned to the scheduler. For example, the scheduler activity information can reflect processor resource demand levels and utilization levels that achieve certain demand and/or utilization minimums.
[0053] At block 624, a space partition of the plurality of processor resource groups is changed based on the scheduler activity information gathered at block 622. At block 626, any queued tasks of a first run-queue of a first processor resource are migrated to a second run-queue of a second processor resource in the same processor resource group as the first processor resource. This happens because the tasks are to be executed against the originally associated scheduler, but the first processor resource is being assigned to another processor resource group. At block 628, the first run-queue information of the first run-queue is replaced with different run-queue information associated with a different scheduler based on the processor resource group to which the first processor resource has moved. The run-queue information should be replaced when the run-queue is empty so as not to interfere with operations of the processor resource. The update to the space partition can be accomplished when the processor resource designated to change processor resource groups is free from current processes, and the run-queue can receive a process once the setting information is updated with the new scheduler information associated with the processor resource group it joined. In this manner, space partitioning of the plurality of processor resources can be achieved dynamically during runtime, schedulers can be flexibly added or removed from a system, and multiple types of schedulers can manage processes concurrently on the same system.
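For illustration, a minimal Python sketch of this reassignment follows: queued tasks are first migrated to another processor resource of the original group so they stay with their original scheduler, the emptied run-queue is verified, and only then is the setting information replaced with that of the new group's scheduler. The dictionary-based data structures and field names are assumptions for the example.

    def reassign_core(core, old_group, new_group):
        peers = [c for c in old_group["cores"] if c is not core]
        while core["run_queue"]:                        # migrate queued tasks within the old group
            task = core["run_queue"].pop(0)
            min(peers, key=lambda c: len(c["run_queue"]))["run_queue"].append(task)
        assert not core["run_queue"]                    # verify the run-queue is empty
        core["settings"] = new_group["scheduler"]       # replace the run-queue setting information
        old_group["cores"].remove(core)
        new_group["cores"].append(core)

    c0 = {"run_queue": ["a", "b"], "settings": "fair-share"}
    c1 = {"run_queue": [], "settings": "fair-share"}
    fair = {"scheduler": "fair-share", "cores": [c0, c1]}
    real = {"scheduler": "real-time", "cores": []}
    reassign_core(c0, fair, real)
    print(c1["run_queue"], c0["settings"])              # ['a', 'b'] real-time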
[0054] Although the flow diagrams of Figures 4-6 illustrate specific orders of execution, the order of execution may differ from that which is illustrated. For example, the order of execution of the blocks may be scrambled relative to the order shown. Also, the blocks shown in succession may be executed concurrently or with partial concurrence. All such variations are within the scope of the present description.
[0055] The present description has been shown and described with reference to the foregoing examples. It is understood, however, that other forms, details, and examples may be made without departing from the spirit and scope of the following claims. The use of the words "first," "second," or related terms in the claims is not used to limit the claim elements to an order or location, but is merely used to distinguish separate claim elements.

CLAIMS

What is claimed is:
1. A scheduler system comprising:
a plurality of processor resources;
a processor resource assignment engine to maintain a plurality of processor resource groups based on scheduler activity information, each processor resource of the plurality of processor resources assignable to one of the plurality of processor resource groups at runtime and managed by a scheduler assigned to the one of the plurality of processor resource groups;
a process assignment engine to:
assign a processor resource request to the one of the plurality of processor resource groups based on a set of process characteristics and a scheduler policy;
identify a first processor resource of the plurality of processor resources assigned to the one of the processor resource groups; and
enqueue a process associated with the processor resource request on a run-queue of the first processor resource based on a strategy of the scheduler policy.
2. The scheduler system of claim 1, comprising:
a container engine to maintain a plurality of containers, each container of the plurality of containers to contain a group of processes assignable to one of the plurality of processor resource groups.
3. The scheduler system of claim 2, wherein the container engine is further to:
reassign a first container of the plurality of containers from a first processor resource group of the plurality of processor resource groups to a second processor resource group based on a change in a container description associated with the set of process characteristics.
4. The scheduler system of claim 2, wherein the container engine comprises at least one of:
an application analysis engine to infer which scheduler best matches the behavior of an application associated with the processor resource request; and
an application interface engine to enable user-supplied parameters to determine a set of control parameters associated with the plurality of containers.
5. The scheduler system of claim 1, wherein the processor resource assignment engine is further to:
reassign the first processor resource from a first processor resource group of the plurality of processor resource groups to a second processor resource group of the plurality of processor resource groups based on at least one of:
an active status of the scheduler;
a change in a control group setting associated with an application of the processor resource request; and
a load balance strategy.
6. A computer-readable storage medium comprising a set of instructions executable by a first processor resource to:
assign a processor resource group to a scheduler;
maintain the processor resource group with a number of processor resources based on scheduler activity information during runtime;
determine a scheduler policy for a task based on a control parameter; and enqueue the task to a run-queue of a second processor resource of the processor resource group assigned to the scheduler based on the determined scheduler policy.
7. The medium of claim 6, wherein the set of instructions is executable by the first processor resource to: analyze a space partition of a plurality of processor resources based on the scheduler activity information and the control parameter of the task, the space partition representing disjoint subsets of the plurality of processor resources;
create the processor resource group when a threshold level of processor resources are available and the space partition lacks a subset for the scheduler; and create the run-queue for the second processor resource associated with the processor resource group. 8. The medium of claim 6, wherein the set of instructions to maintain the processor resource group is executable by the first processor resource to:
reassign a third processor resource of a plurality of processor resources from a first processor resource group to a second processor resource group; and
move the task from a first run-queue to a second run-queue when the third processor resource is reassigned to the second processor resource group. 9. The medium of claim 8, wherein the set of instructions is executable by the first processor resource to:
analyze a demand level and utilization level of the processor resource group; and determine whether the number of processor resources of the processor resource group are available to host the task based on the scheduler policy associated with the processor resource group and a quality-of-service threshold. 10. The medium of claim 6, wherein the set of instructions is executable by the first processor resource to:
analyze a behavior of the task;
determine a set of process characteristics for the task based on the behavior; and
identify which one of a plurality of schedulers is associated with the control parameter satisfied by the set of process characteristics. 11. A method of resource scheduling comprising: identifying a first scheduler of a plurality of schedulers for the task based on a control parameter associated with a task characteristic;
assigning a first processor resource to a first processor resource group of a plurality of processor resource groups based on scheduler activity information, the first processor resource group managed by the first scheduler; and
enqueuing the task on a first run-queue of the first processor resource in the first processor resource group assigned to the first scheduler. 12. The method of claim 11, comprising:
selecting a first container of a plurality of containers for the task based on an application characteristic associated with the task, the first container to contain a group of processes,
wherein the first scheduler is identified based on a container description associated with the first container, the container description to include the control parameter and the task characteristic to comprise the application characteristic. 13. The method of claim 11, comprising:
setting a scheduler flag to cause the first scheduler to assign the task to the first run-queue based on a scheduler policy;
creating the first run-queue and setting information associated with the first run-queue when the scheduler flag is set; and
changing a number of processor resources of the first processor resource group based on the scheduler activity information, a quality-of-service parameter, and a number of tasks in the first run-queue. 14. The method of claim 11, comprising:
monitoring a plurality of processor resources associated with the plurality of processor resource groups;
gathering the scheduler activity information based on a demand level and utilization level of the plurality of processor resources; and changing a space partition of the plurality of processor resource groups based on the scheduler activity information. 15. The method of claim 14, comprising:
migrating a queued task of the first run-queue to a second run-queue associated with a second processor resource of the first processor resource group; and
replacing a first run-queue information of the first run-queue with a second run-queue information associated with a second scheduler of the plurality of schedulers when the first run-queue is empty.


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040010612A1 (en) * 2002-06-11 2004-01-15 Pandya Ashish A. High performance IP processor using RDMA
US20080155203A1 (en) * 2003-09-25 2008-06-26 Maximino Aguilar Grouping processors and assigning shared memory space to a group in a heterogeneous computer environment
US20100333113A1 (en) * 2009-06-29 2010-12-30 Sun Microsystems, Inc. Method and system for heuristics-based task scheduling
US20120173728A1 (en) * 2011-01-03 2012-07-05 Gregory Matthew Haskins Policy and identity based workload provisioning

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109791504A (en) * 2016-09-21 2019-05-21 埃森哲环球解决方案有限公司 For the dynamic BTS configuration of application container
CN109791504B (en) * 2016-09-21 2023-04-18 埃森哲环球解决方案有限公司 Dynamic resource configuration for application containers
US11138146B2 (en) 2016-10-05 2021-10-05 Bamboo Systems Group Limited Hyperscale architecture
US11979339B1 (en) * 2020-08-19 2024-05-07 Cable Television Laboratories, Inc. Modular schedulers and associated methods
US11861397B2 (en) 2021-02-15 2024-01-02 Kyndryl, Inc. Container scheduler with multiple queues for special workloads

