
WO2004077214A2 - System and method for scheduling server functions irrespective of server functionality - Google Patents

System and method for scheduling server functions irrespective of server functionality

Info

Publication number
WO2004077214A2
WO2004077214A2 (PCT/IN2004/000025)
Authority
WO
WIPO (PCT)
Prior art keywords
instructions
recited
server
resources
computing system
Prior art date
Application number
PCT/IN2004/000025
Other languages
French (fr)
Other versions
WO2004077214A3 (en)
Inventor
Vinayak K. Rao
Original Assignee
Vaman Technologies (R & D) Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vaman Technologies (R & D) Limited
Publication of WO2004077214A2
Publication of WO2004077214A3

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485Task life-cycle, e.g. stopping, restarting, resuming execution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues

Definitions

  • a multi-user technology is one that allows more than one user or client to connect to the server and simultaneously perform tasks with other users sharing the same server resources.
  • One of the earliest known multi-user technologies available for user computing was the Operating System.
  • the multi-user functionality for any server is implemented to carry out centralized control and management, for security, for a common place of data storage and for sharing powerful server resources across dumb clients.
  • the above is also known as the Client Server Technology in which dumb clients use highly resourceful and intelligent servers to achieve automation.
  • the earlier clients were inexpensive relative to the server, having minimum processing power and were useful just for data accessing purposes like the input and the output.
  • the existence of dumb clients and the need to implement business rules had resulted in server functionalities and features being pushed more to the servers and even the simplest validations were server-driven.
  • the present invention provides a software-implemented process, system, and method for use in a computing environment.
  • the present invention consists of a system and method in which server scheduling functions are decomposed as objects whose features and functionalities, which are currently implemented using standard structural programming could be implemented as a series of events or finite state machines.
  • each functional server grappled with the same issues of Disk Management, Memory Management, Connection Management and CPU sharing across concurrent client requests.
  • the proposed invention is a resource scheduler, which abstracts and maps patterns of functional usage with patterns of data, its state transition during these functional usage and derives a relationship wherein each resource usage irrespective of any functional server on any operating system and any vendor fall into a finite set of object and state entities for specific server resources and can be reused and applied generic irrespective of derived functionalities.
  • Network Agent just manages receipts and transmissions (functional pattern) of network packets irrespective of protocols (patterns of data) and size of data (i.e. data carrying capacity per
  • Disk Agent manages basic file operations like Open, Close, Read, Write, Seek etc... (functional pattern) irrespective of data patterns (Swap, Rollback Segment, Log, Dbf) and size of data (i.e. static data - fixed size metadata and dynamic data - data in tuples of user defined objects)
  • the scheduler using set of messages, events and action map manages these agents. i.e.
  • Each Server = Σ Agents (Set of Agents)
  • Each Agent = Σ Modules + Σ States + Σ Events (Set of Modules, States, Events)
  • Each Module = Σ Functions (Set of Functions)
  • Server Object = Σ Memory + Σ Disk + Σ CPU + Σ Network + Σ Timer
  • FIG. 3 depicts the multifunctional server as a composition of a set of resource agents, with the CPU decomposed into functional servers such as Database, Web, Mail etc. These derived servers are in turn sets of agents required to deliver server-specific functionality. Also shown in the figure are common agents such as the Dispatcher and the Scheduler Agent 305. Hence the GUI representation shows the functional derivation of an agent as a set of modules executed under a given event and a given state.
  • the entire perception of an object begins from the initiation of a client request, undergoing a series of data transformations as per request options and parameters, linked or synchronized as options specified in request and finally generating resultant response buffer.
  • This is analogous to manufacturing based on an order (the user input query) wherein the raw materials (CPU, Disk, Network, RAM, Timer) undergo a series of transitions under various item (query parameters, i.e. options and values) compositions to deliver the final finished good(s) (the resultant buffer).
  • query parameters i.e. options and values
  • the design and implementation takes care of recursions and locking of a resource for a prolonged time.
  • the scheduler and object event implementations are divided into functionally small finite states linked and sequenced to deliver object functionality.
  • This implementation design hence requires minimum locking and delivers maximum CPU utilization because, instead of functions calling functions as in linear programming, here functions become tasks/state machines which deliver functionality but simultaneously allow the scheduler to dynamically prioritize events.
  • the entire principle enforced in designing agents or functional modules is to enforce preemption by design rather than time based scheduling.
  • the design of the scheduler kernel is based on co-operative multithreading hence concurrency is achieved by a yield based kernel scheduled by priority rather than time driven kernel which has a predefined period of preemption.
  • Agent = Σ Modules + Σ States + Σ Events (Set of Modules, States, Events)
  • Agent = Σ Modules1 + Σ Modules2 + Σ Modules3 + ... + Σ Modules100
  • Each Module being a collection of functions will have a MIN to MAX time of execution based on the extreme values of iterative parameters passed i.e.
  • Agent Execution Time (Min) = Σ Modules1 (Min) + Σ Modules2 (Min) + ... + Σ Modules100 (Min); Agent Execution Time (Max) = Σ Modules1 (Max) + Σ Modules2 (Max) + ... + Σ Modules100 (Max)
  • Agent minimum execution time is the summation of best minimum timing and maximum execution time is the summation of best maximum timing.
  • agent-wise timing, which, with an approximation of the error percentage, could be derived under defined resource restrictions, i.e. assuming a 1 GHz CPU, 128 MB RAM, a 40 GB hard disk drive and a 100 Mbps network card etc., and gauging agent-wise performance using these resources under varying needs and ratios.
  • This equation or mathematical model representation of a resource agent or functional agent helps manage and optimize best performance of server in concurrent or distributed environment under given resource limitation and helps the scheduler to optimize the best way to service a request query in fastest possible response time. Also because of this unique approach the software implementation and any derived entities based on this principle can be easily fabricated to an application-specific integrated circuit (ASIC) or Very Large Scale Integrated (VLSI) chip unlike other schedulers or functional servers.
  • ASIC application-specific integrated circuit
  • VLSI Very Large Scale Integrated
  • FSM Finite State Machine
  • FIG 1 is a block diagram depicting the functional blocks of the scheduler of the preferred embodiment of the invention.
  • FIG 2 is a flow diagram illustrating the scheduler carrying out the server scheduling functions.
  • FIG 3 is a screenshot illustrating the scheduler agent carrying out the scheduling functions.
  • Fig 1 is a block diagram depicting the functional blocks of the scheduler of the preferred embodiment of the invention.
  • the present invention has separate agents like the Network Agent 100 for network hardware and capable of servicing every network request irrespective of protocols configured on server or hardware supported. This is the primary block through which every request source reaches the Scheduler Agent 102 and response to requests are obtained.
  • the Scheduler Agent 102 upon receiving any request has to locate and isolate source of request (basically protocols) and nature of clients i.e. Browser, email client, ODBC compliant client etc.
  • agents for each resource as a manager to source and sink events and deliver functionality expected from the resource as per command options irrespective of functional servers using the same resource.
  • agents for managing disk, network, memory, timer and CPU functionality which actually delivers various combinational servers is decomposed into server specific functional agents such as DML, Server, HTTP etc.
  • the network agent 100 has events and states defined to receive and transmit packets of request or response data irrespective of protocol or underlying hardware i.e. NIC (network interface card) or DUN (dial up network).
  • NIC network interface card
  • DUN dial up network
  • the Network agent creates protocol-specific threads, which are bound and configured to enable the functionality expected from the desired servers using these protocols as their native communication interface.
  • the Network Module 112 creates the DB Threads 114 such as TCP/IP, IPX SPX, UDP, Named pipes or the Web Threads 116 like FTP or HTTP. For every request the worker threads are created that is one thread per protocol is created. As per the analysis of the incoming data a thread is assigned per protocol. For example each http request is given to a new http thread.
  • the Timer Agent 138 is mainly used to force synchronicity and prevent any asynchronous events from running astray. Some periodic functionality checks for monitoring resource usage by the current process, as well as by other processes currently running on the server, are managed by the timer agent. Also, based on the number of unflushed committed transactions and the size of this data, periodic checkpointing is triggered by the timer agent.
  • the Disk Agent 140 is primarily responsible for archiving data as well as managing intermediate data files like log files (committed transactional data) or rollback segments (uncommitted transactional data) or other configuration files.
  • the uniqueness of this agent is that irrespective of different functional servers, which the Scheduler Agent 102 manages the persistent data generated or used is managed by the disk agent analyzing patterns of data as per functional scope that is irrespective of server data patterns the final persistent data is an ODBC compatible data type or its combination. This makes data exchange seamless and SQL queries, which could never work on web / mail objects can now be accessed as functional extension of database server.
  • the Resource Analyzer 124 is responsible for allocating resources such as the Disk, the Network, the Random Access Memory (RAM), and the Central Processing Unit (CPU). For the execution of any of these commands, a check is made as to whether any resource is required, either RAM or disk space etc.; the resource analyzer 124 notifies the cache manager or disk manager to allocate and manage the resource for the request execution, calculating requirements based on the options specified in the client request. Since the scheduler is configured to be a unified scheduler, working irrespective of the nature of the server, the resource management between concurrently logged clients (Database, Web or Mail) becomes the primary objective of the scheduler. We classify the basic resources disk, network, RAM, timer and CPU as the primary resources, which are used as per the behavioral pattern of the functional server in various combinations as desired.
  • Various constraints are validated during every state entity of object or transaction based on the parameter settings of resource usage set by the user, state of available versus free resources, quantity of the resource and time for which the resource is available. Any translation of resource between primary or secondary is decided by the resource analyzer and the RAM is swapped or retrieved accordingly.
  • transition data states before, during and after being committed are managed in various disk pattern storages like rollback segment, log file and final database file.
  • Since the scheduler kernel design is primarily asynchronous, yield-based priority scheduling, any set of tasks which requires synchronicity is enforced using the timer agent.
  • the kernel therefore, is designed to be yield based, wherein the method of deciding the priority of the query received and scheduling server resources is based primarily on the yield of the query received which makes it possible to utilize the CPU most efficiently by relinquishing the resource once the query has been processed and executed.
  • the Dispatcher Agent 104 is responsible in case of sending outgoing request and buffering the request as per requirements.
  • the Global Cache 106 is used as virtual memory available to any of the requesting operations.
  • the primary function of this dispatcher agent 104 is to help the resource analyzer 124 synchronize the "RAM" and "Network" for every nature of response. This may vary from a server side cursor created by any database client to a frequently used web page of a web server client.
  • the dispatcher agent 104 optimizes the memory usage across repetitive client requests by synchronizing cached objects and various states of frequently used data.
  • the Parser 108 is responsible for translating, analyzing, evaluating, validating any syntactic or semantic errors and parsing the client request that could either be SQL or XQL or OQL. In other words it manages to analyze and parse requests or commands irrespective of the nature of the clients, which could be either a web client or an ftp client or an ODBC compliant client or a mail client etc.
  • the DAT 110 is a repository for various objects like parser dictionary, message and error strings, and basic metadata definitions.
  • the repository has also some decision matrix data which is a ratio of response time for various data structure operations like search, insert, delete across algorithms and its derivatives like trees, hash tables, linked lists with respect to loads of data v/s resource constraints.
  • decisions taken by scheduler and resource analyzer 124 are based on the anticipated resultant time-ratio as dictated by the values in these DAT 110 structures and stored in Look-Up Tables (LUTs).
  • Before execution of any successfully parsed command, the License Manager 118 checks constraints for execution of the command as dictated by the purchased licenses. A license file decides the constraints validated by the license manager. This license file is shipped as per the purchase details like server edition, user licenses, OS etc.
  • the Audit Manager 120 tracks and logs every action and result as per the audit parameter settings for the request. This helps in tracking usage of the object and the operation irrespective of function of server that is Database Server, Web Server, Mail Server etc and enhance security.
  • the Command Analyzer 122 decides upon the flow of execution of the request and passes control to the respective agent as per the scope of work expected and options specified in the command. In the event of local request the command is analyzed by local command Analyzer.
  • the DML Agent analyzes the DML command and the Server Agent analyzes the server command.
  • OOOV Operation, Object, Option and Value
  • the command analyzer 122 decides the flow of execution and messages the appropriate agents to execute the required tasks.
  • the granularity of state machines and its modularity mapping with options of object functionality ensures that only optimum code gets executed as per user query options and effectively utilizes CPU to a minimum.
  • the state machine based scheduler dynamically decides program flow execution rather than the programmer incorporating it in the program code itself.
  • the command analyzer 122 decides this flow in real time based on few heuristics analyzed by the table generated by the virtual statistics 130 and associates a series of functional event patterns to execute for various agents as needed by the query.
  • the Messaging 126 is a way in which one has to communicate between two of the states like either sourcing a request or sinking a request in the case of an event.
  • the Priority Queuing 128 is based on task according to priority that can be scheduled by the kernel.
  • the scheduler can raise the priority so as to get maximum CPU time.
  • a series of client requests, based on the source of the request (i.e. whether HTTP/FTP/SMTP/POP requests), have various protocol timeouts pre-assigned.
  • Database clients also have query and connection timeouts. Hence these timeout constraints force the scheduler to execute and update the clients about their requests on an on-going basis and work on the process for execution.
  • the scheduler must ensure that none of these clients timeout for need of resources or server attention.
  • the criteria's based on which the scheduler arranges queries in the priority queue 128 include source request protocol, nature of query - (DDL / DML etc), complexity of query (joins, conditions etc), state of objects in the query (under transactional lock, dropped etc), resource availability (RAM, Disk etc), Burst or workload of requests within unit time, response timeout, transitional resources required before deriving final result buffer, size of response data and size of cache.
  • a module is a group of atomic functions, which is scheduled by the scheduler.
  • Any functional agent such as DML
  • DML is a set of sequenced modules to deliver DML functionality (such as select / insert / update / delete).
  • the scheduler tracks each query flow across various functional server agents and changes the priorities dynamically due to any unfavorable condition during execution (such as need for lock on an object). This makes queuing unique and optimizes CPU usage. Any change in flow required, to change the state entity of a query being currently executed, can be messaged and monitored across the flow between the various agents or can be aborted to relinquish certain resources.
  • Virtual Statistics 130 run and have to maintain global data about the number of request like number of Insert, Delete etc. Also currently how many Select etc is in memory are maintained in the virtual statistics in a virtual table.
  • the Virtual Statistics 130 is primarily a derivative of various historical and real-time statistical analysis of operations on objects. A lot of heuristics required to predict resource pattern usage is analyzed and updated per operation on any object in the server. This process helps in mapping a pattern of resource flow across various modules of agents required to deliver various server functionalities. The heuristics are calculated based on resource specific functional agents like disk, network, timer etc. and are formulated into a mathematical model to derive functional throughput (i.e. per unit time per unit cost per unit of measurement).
  • the Error Handling & Notification block 132 looks after the error handling, reporting or notification.
  • the Display Log and Statistical Log block 134 handles how the display, for example an HTML (Hyper Text Markup Language) file in the case of a browser, is handled, and also maintains the Statistical Log.
  • HTML Hyper Text Markup Language
  • HTTP agent 136 does this translation of query data to html tagged syntaxes.
  • Script agent in conjunction with HTTP agent manages validation and execution of web scripts like ASP/JSP etc.
  • the Server Agent 142 is mainly involved for DDL / DCL operations.
  • the server agent manages any object creation, alteration or deletion of objects. It also executes any DCL commands for statistical history or for either query or object analysis.
  • the DML Agent 144 executes only DML specific queries. As soon as a syntactically valid client request reaches the agent it validates the needs of the query objects and returns an error incase the object is invalid. It then analyses the requirements as per the response buffer expected and allocates resource required for execution. The cursor type specified by the client request and size of anticipated result after or during query execution typically dictates this. If the queries require some expression to be evaluated or special functions to be executed on operands (i.e. mathematical / logical) the DML Agent organizes the resources and executes.
  • the flow diagram illustrates the preferred embodiment of the present invention explaining the process by which the system carries out the scheduling and integrating various server functionalities and extending this functionalities for seamless data and functionality interchange.
  • the process begins with the Scheduler Agent 102 receiving a Client request 200, such as 'CONNECT'.
  • the Scheduler agent 102 then proceeds to isolate the source and nature of the request 202.
  • the queries received are dynamically ordered in a Priority Queue based on predetermined conditions. As and when new queries are received the Priority Queue evaluates the nature and source of those queries to rearrange the ordering of the queries in the Priority Queue.
  • After completing the isolation of the request, the Scheduler Agent 102 then proceeds to check if the request is a Scheduler command 204. In the event that the request is not a Scheduler Command, the Scheduler generates an error code 206. In case the request is a Scheduler command, it needs to be executed beyond the scope of the scheduler functionality and the Scheduler agent 102 notifies the various agents as per the request scope. Hence a web request, ODBC request or mail client request is directed to the respective agent, such as a database server, a web server or a mail server.
  • the Scheduler Agent 102 proceeds to analyze if it is a DML command 210. In the event that it is a DML command it is given 212 to the DML Agent 144. In the event that it is a non-DML command, as per the request parameters and options the request is given 214 to the Server Agent 142 for DDL or DCL requests. On giving the request to the Server Agent, the request is then checked for any sub DML queries 216. In the event that the request does not have any sub DML queries, it is passed 212 to the DML Agent 144.
  • the Scheduler Agent 102 forks a new thread for sub queries 218 and the sub query is passed 212 to the DML Agent 144.
  • the DML Agent 144 then proceeds to send the request 220 to the Resource analyzer 124, which allocates the resources such as RAM etc.
  • the Resource analyzer 124 then proceeds to check if RAM is required 222. In the event the RAM is required, the request is passed 224 to the Cache Manager 106 for processing. In the event of no RAM required, the Disk Agent 140 directly processes the request 226. After the processing done by the Cache Manager 106 and the Disk Agent 140, the request is executed 228, that is after the successful resource allocation the query is executed and the resultant buffer is prepared.
  • the request is then checked for Response to Transmit 230.
  • the request is passed 232 to the Network Agent 100, which transmits the result.
  • the result is then analyzed to ascertain whether it is a successful command 234.
  • an error handler is generated 236.
  • a response is sent 238.
  • the request is sent 232 to the Network Agent 100.
  • the result is then analyzed to ascertain whether it is a successful command 234.
  • an error handler is generated 236.
  • a response is sent 238.
  • the notification message data varies as per the nature of source request i.e. web clients get a html error page whereas an ODBC client gets an error code as per ODBC standard.
  • the Scheduler agent 102 checks if it is a web-based request 246, meant for a Web Server. If the request is for a Web Server, the Scheduler checks whether it is an http request 248. In case the request is an http request, it is given 250 to the Http Agent 136. Next the Http Agent 136 passes on this request for translation of the OQL to DML or DDL 252. In the event of a non-http request, the Scheduler Agent 102 classifies it as an ftp request 254. In the event it is not an ftp request, it is classified as an unknown client request 258.
  • every ftp command translates and maps to the database maintained virtual file system. After this translation to DDL or DML respectively the process to check for DML command and onwards is carried out.
  • On checking if the request is for a Web Server 246, in the event that it is not, it is checked to ascertain if it is a request for the Mail Server 256. If the request is not for a Mail Server, it is classified as an unknown client request 258. Otherwise the request is classified as a SMTP/POP request 260 and it is then given for translation from OQL to DML or DDL 252.
  • FIG. 3 illustrates a screenshot of an embodiment of the present invention.
  • the Scheduler Agent 305 is depicted with the functional servers and resources listed in the column on the left.
  • the GUI representation shows the functional derivation of an agent as a set of modules executed under a given event and a given state.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention relates generally to a system and method for scheduling and integrating various server functionalities, such as a database management system, a web server or a mail server, and extending these functionalities for seamless data and functional interchange without affecting any existing client applications for these individual servers.

Description

TITLE OF INVENTION
System and Method for Scheduling Server Functions Irrespective of Server Functionality
BACKGROUND OF THE INVENTION
A multi-user technology is one that allows more than one user or client to connect to the server and simultaneously perform tasks with other users sharing the same server resources. One of the earliest known multi-user technologies available for user computing was the Operating System. Generally the multi-user functionality for any server is implemented to carry out centralized control and management, for security, for a common place of data storage and for sharing powerful server resources across dumb clients. The above is also known as Client Server Technology, in which dumb clients use highly resourceful and intelligent servers to achieve automation. The earlier clients were inexpensive relative to the server, had minimum processing power and were useful just for data accessing purposes like input and output. The existence of dumb clients and the need to implement business rules resulted in server functionalities and features being pushed to the servers, and even the simplest validations were server-driven. As a result any server technology evolution gave rise to functional objects which could be executed only on the server. For example, in a database server, stored procedures, packages and triggers were created to execute only on the server to deliver maximum performance rather than shuttling data and requests between clients and servers.
However, the increasing popularity of the Personal Computer (PC) added a new paradigm to the Client-Server computing model by creating intelligent terminals, which could do much more than a dumb client. PCs were very versatile and had processing power and resources nearly equal to the server. This brought about a significant change in the Distributed Computing Environment (DCE), where equally powerful PCs connected to the advanced servers. With the advent of intelligent and resourceful clients, many business logic validations and functionalities could be pushed to clients, which gave the server more time for meaningful tasks rather than data entry validations. This change in server and client processing power demanded a change in the way server scheduling was performed. The small performance difference between clients and servers led to many limitations in concurrency, and the earlier kernel or scheduler designs followed by derived servers needed a drastic change, because they were plagued by legacy scheduling problems and the speed of resources demanded by faster clients made the old server scheduling designs falter under concurrency load.
The emergence of intelligent clients also gave rise to various other demands, such as the need for balancing, or making optimum use of, the processing power of client resources through dynamic load balancing in case the server was overloaded. Further, intelligent clients demanded interactivity rather than remaining passive during real-time application usage. Also, most of the servers built today generally use the underlying OS kernel features and extend functionalities as per their individual functional purpose. Hence the OS kernel, Database kernel and Web Server kernel generally have the same behavior, but in spite of being derivatives of the same parent kernel their data exchange functionality cannot be seamlessly integrated. Accordingly, a need exists for a server which could be message and event driven (interactive) rather than following standard structural programming and scheduling practices (passive).
Hence there exists a need for extending the multi-user resource sharing feature of a server across functionalities and devising a mechanism which could work on objects and interfaces rather than just a singular function of a database server / web server / mail server etc.
SUMMARY OF THE INVENTION
To meet the foregoing needs, the present invention provides a software-implemented process, system, and method for use in a computing environment. The present invention consists of a system and method in which server scheduling functions are decomposed into objects whose features and functionalities, currently implemented using standard structural programming, are instead implemented as a series of events or finite state machines. We perceive a scheduler in a server with a totally different paradigm: any multi-user server, i.e. an Operating System, Database Server, Web Server or Mail Server, finally boils down to sharing common server resources like RAM, CPU, DISK, TIMER and NETWORK between clients. Hence each functional server grappled with the same issues of Disk Management, Memory Management, Connection Management and CPU sharing across concurrent client requests. Hence the proposed invention is a resource scheduler which abstracts and maps patterns of functional usage with patterns of data and their state transitions during this functional usage, and derives a relationship wherein each resource usage, irrespective of the functional server, operating system or vendor, falls into a finite set of object and state entities for specific server resources and can be reused and applied generically irrespective of the derived functionalities.
Hence, to achieve this design objective, we create functional agents for each resource, i.e. a Disk Agent, Network Agent, Timer Agent etc., and map all combinational functionality patterns and data patterns into states and events to handle any combination of client request and response under various resource and workload constraints, under the set of algorithms best suited to deliver the desired functionality from the respective resource.
Example: Network Agent just manages receipts and transmissions (functional pattern) of network packets irrespective of protocols (patterns of data) and size of data (i.e. data carrying capacity per
Disk Agent manages basic file operations like Open, Close, Read, Write, Seek etc... (functional pattern) irrespective of data patterns (Swap, Rollback Segment, Log, Dbf) and size of data (i.e. static data - fixed size metadata and dynamic data - data in tuples of user defined objects)
The scheduler manages these agents using a set of messages, events and an action map, i.e.
Each Server = Σ Agents (Set of Agents)
Each Agent = Σ Modules + Σ States + Σ Events (Set of Modules, States, Events)
Each Module = Σ Functions (Set of Functions)
We perceive any object as a unique composition of these resources at any given state entity using a ratio of these resources in a predefined quantity from its time of creation, transition during usage till destruction when all the resources are relinquished back to the parent from which it had been derived (typically an operating system), i.e.
Server Object = Σ Memory + Σ Disk + Σ CPU + Σ Network + Σ Timer
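As a rough illustration of this decomposition (an editorial sketch, not code taken from the patent), the relationships above can be expressed as simple data structures; the class and field names below are assumptions chosen for readability.

# Hypothetical sketch of the decomposition described above: a server is a set of
# agents, an agent is a set of modules plus states and events, and a module is a
# set of functions. Names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Module:
    name: str
    functions: List[Callable] = field(default_factory=list)  # Module = Σ Functions

@dataclass
class Agent:
    name: str
    modules: List[Module] = field(default_factory=list)      # Agent = Σ Modules
    states: List[str] = field(default_factory=list)          #       + Σ States
    events: List[str] = field(default_factory=list)          #       + Σ Events

@dataclass
class Server:
    agents: List[Agent] = field(default_factory=list)        # Server = Σ Agents
    # Server Object = Σ Memory + Σ Disk + Σ CPU + Σ Network + Σ Timer
    resources: Dict[str, float] = field(default_factory=lambda: {
        "memory": 0.0, "disk": 0.0, "cpu": 0.0, "network": 0.0, "timer": 0.0})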
Hence functional entities like the database server, web server etc., apart from the standard network or disk usage patterns, also map CPU usage patterns in the function-specific agents like the DML agent, HTTP agent etc., using only the CPU as a resource to deliver the server-specific desired functionalities. This can be perceived in a GUI representation as shown in Fig. 3. The figure depicts the multifunctional server as a composition of a set of resource agents, with the CPU decomposed into functional servers such as Database, Web, Mail etc. These derived servers are in turn sets of agents required to deliver server-specific functionality. Also shown in the figure are common agents such as the Dispatcher and the Scheduler Agent 305. Hence the GUI representation shows the functional derivation of an agent as a set of modules executed under a given event and a given state.
The entire perception of an object begins from the initiation of a client request, which undergoes a series of data transformations as per the request options and parameters, linked or synchronized as per the options specified in the request, finally generating the resultant response buffer. This is analogous to manufacturing based on an order (the user input query) wherein the raw materials (CPU, Disk, Network, RAM, Timer) undergo a series of transitions under various item (query parameters, i.e. options and values) compositions to deliver the final finished good(s) (the resultant buffer). These events are addressed or evoked through messages and have few dependencies to be checked for concurrency and shared resources. This gave rise to a totally asynchronous approach, which could lead to deadlock if an event locked a resource and was swapped by the kernel out of its task context. Hence the design and implementation take care of recursions and of locking of a resource for a prolonged time. So, like critical sections, the scheduler and object event implementations are divided into functionally small finite states linked and sequenced to deliver object functionality. This implementation design hence requires minimum locking and delivers maximum CPU utilization because, instead of functions calling functions as in linear programming, here functions become tasks/state machines which deliver functionality while simultaneously allowing the scheduler to dynamically prioritize events. The entire principle enforced in designing agents or functional modules is to enforce preemption by design rather than time-based scheduling. The design of the scheduler kernel is based on co-operative multithreading, hence concurrency is achieved by a yield-based kernel scheduled by priority rather than a time-driven kernel which has a predefined period of preemption. The estimation of a specific functionality delivery - either by a resource agent (network / disk etc.) or a functional agent (DML / HTTP etc.) - is derived as follows. Consider an agent as a collection of 100 modules (under given states and events), i.e.
Agent = Σ Modules + Σ States + Σ Events (Set of Modules, States, Events)
Agent = Σ Modules1 + Σ Modules2 + Σ Modules3 + ... + Σ Modules100
Each Module being a collection of functions will have a MIN to MAX time of execution based on the extreme values of iterative parameters passed i.e.
Agent Execution Time (Min) = Σ Modules1 (Min) + Σ Modules2 (Min) + ... + Σ Modules100 (Min)
Agent Execution Time (Max) = Σ Modules1 (Max) + Σ Modules2 (Max) + ... + Σ Modules100 (Max)
An agent's minimum execution time is the summation of the modules' minimum timings, and its maximum execution time is the summation of the modules' maximum timings. Hence an approximate average time of execution can be derived, predicting the states and events that would be executed under a given workload. This gives a nearly accurate estimation of agent-wise timing, which, with an approximation of the error percentage, can be derived under defined resource restrictions, i.e. assuming a 1 GHz CPU, 128 MB RAM, a 40 GB hard disk drive, a 100 Mbps network card etc., and gauging agent-wise performance using these resources under varying needs and ratios. This equation or mathematical model representation of a resource agent or functional agent helps manage and optimize the best performance of the server in a concurrent or distributed environment under given resource limitations, and helps the scheduler to optimize the best way to service a request query in the fastest possible response time. Also, because of this unique approach, the software implementation and any derived entities based on this principle can easily be fabricated into an application-specific integrated circuit (ASIC) or Very Large Scale Integrated (VLSI) chip, unlike other schedulers or functional servers.
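The summation model above can be worked through with a small numeric sketch; the per-module timings below are invented placeholder values, and the reference machine is only an example.

# Illustrative estimate of agent execution time from per-module (min, max) timings,
# following the summation model above.
def agent_execution_time(module_timings):
    """module_timings: list of (min_seconds, max_seconds), one pair per module."""
    t_min = sum(lo for lo, _ in module_timings)   # summation of module minimum timings
    t_max = sum(hi for _, hi in module_timings)   # summation of module maximum timings
    t_avg = (t_min + t_max) / 2                   # rough average used for prediction
    return t_min, t_max, t_avg

# Example: a hypothetical agent of three modules benchmarked under fixed
# resource restrictions (e.g. a 1 GHz CPU with 128 MB RAM).
timings = [(0.002, 0.010), (0.001, 0.004), (0.005, 0.030)]
print(agent_execution_time(timings))   # roughly (0.008, 0.044, 0.026)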
To achieve our objective of seamless data and functionality interchange, the patterns of persistent data used or created by these functional servers (web servers etc.) had to be in a common persistent format without compromising any functionality expected by a web, mail or database client. Also, the sub-objects of these differently functional servers should be able to communicate using messages or events so that, irrespective of common server functionalities like concurrency, response timeouts and caching, the resource utilization of the server is scheduled by the kernel without compromising the different architectural and functional designs.
The entire design is based on state machines and modules comprising various events communicating via messages; that is, it is event driven using the Finite State Machine (FSM) concept, and the functionality is broken down into a series of events scheduled by the kernel.
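A minimal sketch of such a cooperative, yield-based kernel follows; it assumes modules written as generators that yield after each small finite state, so the scheduler can re-prioritize between states. The class, task names and priority values are illustrative, not the patent's implementation.

# Cooperative, priority-driven scheduler sketch: preemption is achieved by design,
# because every module yields control after each small state rather than being
# preempted on a timer.
import heapq
from itertools import count

class CooperativeScheduler:
    def __init__(self):
        self._ready = []          # (priority, seq, task) min-heap; lower runs sooner
        self._seq = count()

    def spawn(self, task, priority=10):
        heapq.heappush(self._ready, (priority, next(self._seq), task))

    def run(self):
        while self._ready:
            priority, _, task = heapq.heappop(self._ready)
            try:
                new_priority = next(task)          # run the module up to its next yield
            except StopIteration:
                continue                           # task finished; resources released
            # a task may suggest a new priority when it yields
            self.spawn(task, new_priority if new_priority is not None else priority)

def dml_select(query):
    yield   # state 1: parse and validate
    yield   # state 2: allocate resources via the resource analyzer
    yield   # state 3: build the result buffer

def http_get(path):
    yield   # state 1: locate the page
    yield   # state 2: transmit the response

sched = CooperativeScheduler()
sched.spawn(dml_select("SELECT 1"), priority=5)
sched.spawn(http_get("/index.html"), priority=8)
sched.run()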
BRIEF DESCRIPTION OF THE DRAWINGS
The various objects and advantages of the present invention will become apparent to those of ordinary skill in the relevant art after reviewing the following detailed description and accompanying drawings, wherein:
FIG 1 is a block diagram depicting the functional blocks of the scheduler of the preferred embodiment of the invention.
FIG 2 is a flow diagram illustrating the scheduler carrying out the server scheduling functions.
FIG 3 is a screenshot illustrating the scheduler agent carrying out the scheduling functions.
DETAILED DESCRIPTION OF THE INVENTION
While the present invention is susceptible to embodiment in various forms, there is shown in the drawings and will hereinafter be described a presently preferred embodiment, with the understanding that the present disclosure is to be considered an exemplification of the invention and is not intended to limit the invention to the specific embodiment illustrated.
In the present disclosure, the words "a" or "an" are to be taken to include both the singular and the plural. Conversely, any reference to plural items shall, where appropriate, include the singular. Referring now to the drawings, Fig 1 is a block diagram depicting the functional blocks of the scheduler of the preferred embodiment of the invention.
As seen from the block diagram, for controlling and managing each resource the present invention has separate agents, like the Network Agent 100 for the network hardware, capable of servicing every network request irrespective of the protocols configured on the server or the hardware supported. This is the primary block through which every request source reaches the Scheduler Agent 102 and responses to requests are obtained. The Scheduler Agent 102, upon receiving any request, has to locate and isolate the source of the request (basically the protocol) and the nature of the client, i.e. browser, email client, ODBC-compliant client etc.
As mentioned earlier, we create an agent for each resource as a manager to source and sink events and deliver the functionality expected from the resource as per the command options, irrespective of the functional servers using the same resource. Hence we have individual agents for managing disk, network, memory and timer functionality, while the CPU functionality, which actually delivers the various combinational servers, is decomposed into server-specific functional agents such as DML, Server, HTTP etc.
The network agent 100 has events and states defined to receive and transmit packets of request or response data irrespective of the protocol or underlying hardware, i.e. NIC (network interface card) or DUN (dial-up network). The Network agent creates protocol-specific threads, which are bound and configured to enable the functionality expected from the desired servers using these protocols as their native communication interface.
The Network Module 112 creates the DB Threads 114, such as TCP/IP, IPX/SPX, UDP or Named Pipes, or the Web Threads 116, like FTP or HTTP. Worker threads are created per protocol, that is, one thread per protocol. As per the analysis of the incoming data, a thread is assigned per protocol; for example, each http request is given to a new http thread.
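One possible reading of this per-protocol dispatch is sketched below; the protocol detection and queue layout are assumptions made for illustration only.

# Hypothetical sketch: one worker thread per protocol, with incoming requests
# classified and handed to the matching protocol queue.
import threading, queue

PROTOCOLS = ("HTTP", "FTP", "TCP/IP", "NAMED_PIPE")
work_queues = {p: queue.Queue() for p in PROTOCOLS}

def protocol_worker(protocol):
    q = work_queues[protocol]
    while True:
        request = q.get()
        if request is None:          # shutdown sentinel
            break
        print(f"[{protocol}] handling {request!r}")
        q.task_done()

threads = [threading.Thread(target=protocol_worker, args=(p,), daemon=True)
           for p in PROTOCOLS]
for t in threads:
    t.start()

def dispatch(raw_request):
    # naive classification of the incoming data; real analysis is more involved
    protocol = "HTTP" if raw_request.startswith("GET ") else "TCP/IP"
    work_queues[protocol].put(raw_request)

dispatch("GET /index.html HTTP/1.1")
work_queues["HTTP"].join()           # wait for the dispatched request to finish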
The Timer Agent 138 is mainly used to force synchronicity and prevent any asynchronous events from running astray. Some periodic functionality checks for monitoring resource usage by the current process, as well as by other processes currently running on the server, are managed by the timer agent. Also, based on the number of unflushed committed transactions and the size of this data, periodic checkpointing is triggered by the timer agent.
The Disk Agent 140 is primarily responsible for archiving data as well as managing intermediate data files like log files (committed transactional data), rollback segments (uncommitted transactional data) or other configuration files. The uniqueness of this agent is that, irrespective of the different functional servers which the Scheduler Agent 102 manages, the persistent data generated or used is managed by the disk agent by analyzing patterns of data as per the functional scope; that is, irrespective of the server data patterns, the final persistent data is an ODBC-compatible data type or a combination thereof. This makes data exchange seamless, and SQL queries, which could never work on web / mail objects, can now access them as a functional extension of the database server.
The Resource Analyzer 124 is responsible for allocating resources such as the Disk, the Network, the Random Access Memory (RAM), and the Central Processing Unit (CPU). For the execution of any of these commands, a check is made as to whether any resource is required, either RAM or disk space etc.; the resource analyzer 124 notifies the cache manager or the disk manager to allocate and manage the resource for the request execution, calculating the requirements based on the options specified in the client request. Since the scheduler is configured to be a unified scheduler, working irrespective of the nature of the server, the resource management between concurrently logged clients (Database, Web or Mail) becomes the primary objective of the scheduler. We classify the basic resources disk, network, RAM, timer and CPU as the primary resources, which are used as per the behavioral pattern of the functional server in various combinations as desired. The various states of data and objects under the various operational and optional constraints of user queries, during every stage of a transaction between request and response (irrespective of functionally varying clients), are mapped into various patterns, and their resource usages are managed by the resource analyzer. Various constraints are validated during every state entity of an object or transaction based on the parameter settings of resource usage set by the user, the state of available versus free resources, the quantity of the resource and the time for which the resource is available. Any translation of a resource between primary and secondary is decided by the resource analyzer and the RAM is swapped or retrieved accordingly. Likewise, transition data states before, during and after being committed are managed in various disk pattern storages like the rollback segment, the log file and the final database file. Since the scheduler kernel design is primarily asynchronous, yield-based priority scheduling, the synchronicity of any set of tasks that requires it is enforced using the timer agent. The kernel, therefore, is designed to be yield based, wherein the method of deciding the priority of the query received and scheduling server resources is based primarily on the yield of the query received, which makes it possible to utilize the CPU most efficiently by relinquishing the resource once the query has been processed and executed.
The Dispatcher Agent 104 is responsible for sending outgoing requests and buffering the requests as per requirements. The Global Cache 106 is used as virtual memory available to any of the requesting operations. The primary function of this dispatcher agent 104 is to help the resource analyzer 124 synchronize the "RAM" and the "Network" for every nature of response. This may vary from a server-side cursor created by any database client to a frequently used web page of a web server client. Along with the resource analyzer 124, the dispatcher agent 104 optimizes memory usage across repetitive client requests by synchronizing cached objects and the various states of frequently used data. The Parser 108 is responsible for translating, analyzing, evaluating and validating any syntactic or semantic errors and parsing the client request, which could be SQL, XQL or OQL. In other words, it manages to analyze and parse requests or commands irrespective of the nature of the client, which could be a web client, an ftp client, an ODBC-compliant client, a mail client etc.
The DAT 110 is a repository for various objects like the parser dictionary, message and error strings, and basic metadata definitions. The repository also holds some decision-matrix data, which is a ratio of response times for various data structure operations like search, insert and delete across algorithms and their derivatives, like trees, hash tables and linked lists, with respect to data loads versus resource constraints. Many decisions taken by the scheduler and the resource analyzer 124 are based on the anticipated resultant time-ratio as dictated by the values in these DAT 110 structures and stored in Look-Up Tables (LUTs).
Before execution of any successfully parsed command, the License Manager 118 checks the constraints for execution of the command as dictated by the purchased licenses. A license file decides the constraints validated by the license manager. This license file is shipped as per the purchase details, like server edition, user licenses, OS etc.
In case the constraints for the flow need to be monitored and audited, the Audit Manager 120 tracks and logs every action and result as per the audit parameter settings for the request. This helps in tracking usage of the object and the operation irrespective of the function of the server, that is Database Server, Web Server, Mail Server etc., and enhances security.
The Command Analyzer 122 decides upon the flow of execution of the request and passes control to the respective agent as per the scope of work expected and the options specified in the command. In the event of a local request, the command is analyzed by the local Command Analyzer: the DML Agent analyzes DML commands and the Server Agent analyzes server commands. We extend the concept of Operation, Object, Option and Value (OOOV) to associate a set of modules which behaves as a callback function for every object option specified in a query. The entire architecture of the various server functionalities is accomplished using a set of agents delivering a set of specific tasks as per the functionally varying servers supported. These can be loaded dynamically as per the nature of a request and the need for demanded instantiation. Accordingly, the command analyzer 122 decides the flow of execution and messages the appropriate agents to execute the required tasks. The granularity of the state machines and their modular mapping onto the options of object functionality ensures that only the optimum code gets executed as per the user query options, keeping CPU utilization to a minimum. Hence, unlike program code written and executed in a linear manner, the state-machine-based scheduler dynamically decides the program flow execution rather than the programmer incorporating it in the program code itself. The command analyzer 122 decides this flow in real time, based on a few heuristics analyzed from the table generated by the virtual statistics 130, and associates a series of functional event patterns to be executed by the various agents as needed by the query.
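The OOOV idea can be illustrated with a small table that maps (operation, object, option) combinations to callback modules, so only the code needed for the options actually present in a query runs; the table entries and function names below are invented examples, not the patent's own code.

# Illustrative Operation-Object-Option-Value (OOOV) dispatch table.
def select_rows(value):      print("selecting where", value)
def apply_order_by(value):   print("ordering by", value)
def create_table(value):     print("creating table", value)

OOOV_MODULES = {
    ("SELECT", "TABLE", "WHERE"):    select_rows,
    ("SELECT", "TABLE", "ORDER_BY"): apply_order_by,
    ("CREATE", "TABLE", "SCHEMA"):   create_table,
}

def execute(parsed_query):
    """parsed_query: list of (operation, object, option, value) tuples."""
    for operation, obj, option, value in parsed_query:
        module = OOOV_MODULES.get((operation, obj, option))
        if module is None:
            raise ValueError(f"no module for {(operation, obj, option)}")
        module(value)   # only the modules required by the query's options run

execute([("SELECT", "TABLE", "WHERE", "id = 7"),
         ("SELECT", "TABLE", "ORDER_BY", "name")])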
Messaging 126 is the way in which two states communicate, for example either sourcing a request or sinking a request in the case of an event.
Priority Queuing 128 orders tasks according to the priority with which they can be scheduled by the kernel. The scheduler can raise a priority so as to get maximum CPU time. There can be various levels of priority for the requests that are serviced by the scheduler and kept in the Priority Queue 128. A series of client requests, based on the source of the request (i.e. whether HTTP / FTP / SMTP / POP requests), have various protocol timeouts pre-assigned. Database clients also have query and connection timeouts. These timeout constraints force the scheduler to execute and update the clients about their requests on an ongoing basis and work on the process for execution. The scheduler must ensure that none of these clients times out for want of resources or server attention. Various factors dictate the positioning of a query in the priority queue 128. The criteria based on which the scheduler arranges queries in the priority queue 128 include the source request protocol, the nature of the query (DDL / DML etc.), the complexity of the query (joins, conditions etc.), the state of the objects in the query (under transactional lock, dropped etc.), resource availability (RAM, Disk etc.), the burst or workload of requests within unit time, the response timeout, the transitional resources required before deriving the final result buffer, the size of the response data and the size of the cache.
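A hedged sketch of how those criteria might be folded into a queue position follows; the weights, field names and scoring formula are invented for illustration, since the patent does not give a concrete formula.

# Toy scoring of requests for the priority queue 128: lower score = served sooner.
import heapq, time

def priority_score(req):
    score = 0
    score += {"HTTP": 3, "FTP": 4, "SMTP": 5, "ODBC": 2}.get(req["protocol"], 6)
    score += {"DDL": 4, "DML": 2}.get(req["kind"], 3)        # nature of query
    score += req.get("join_count", 0)                        # complexity of query
    score += 10 if req.get("objects_locked") else 0          # state of objects
    score -= 5 if req.get("resources_free") else 0           # resource availability
    # urgency: the closer the response timeout, the smaller (more urgent) the score
    score -= max(0, 20 - int(req["timeout_at"] - time.time()))
    return score

pq = []
def enqueue(req):
    heapq.heappush(pq, (priority_score(req), id(req), req))

enqueue({"protocol": "ODBC", "kind": "DML", "join_count": 2,
         "timeout_at": time.time() + 30})
enqueue({"protocol": "HTTP", "kind": "DML", "resources_free": True,
         "timeout_at": time.time() + 5})
print(heapq.heappop(pq)[2]["protocol"])   # the request closer to timing out pops first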
Since each of the server-specific functional agents is programmed using a state-machine-based concept, the priority in the queue is decided at the module level rather than the thread level. A module is a group of atomic functions which is scheduled by the scheduler. A functional agent (such as DML) is a set of sequenced modules that deliver the DML functionality (such as select / insert / update / delete). The scheduler tracks each query flow across the various functional server agents and changes the priorities dynamically upon any unfavorable condition during execution (such as the need for a lock on an object). This makes the queuing unique and optimizes CPU usage. Any change in flow required to change the state entity of a query currently being executed can be messaged and monitored across the flow between the various agents, or the query can be aborted to relinquish certain resources.
Virtual Statistics 130 runs and maintains global data about the number of requests, like the number of Inserts, Deletes etc. How many Selects etc. are currently in memory is also maintained in the virtual statistics in a virtual table. The Virtual Statistics 130 is primarily a derivative of various historical and real-time statistical analyses of operations on objects. The heuristics required to predict resource usage patterns are analyzed and updated per operation on any object in the server. This process helps in mapping a pattern of resource flow across the various modules of the agents required to deliver the various server functionalities. The heuristics are calculated based on the resource-specific functional agents like disk, network, timer etc. and are formulated into a mathematical model to derive functional throughput (i.e. per unit time per unit cost per unit of measurement).
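A simplified sketch of such a virtual statistics table is shown below; the field names and the throughput formula are assumptions, illustrating only the bookkeeping idea of per-operation counters feeding a throughput estimate.

# Toy virtual-statistics table: global operation counters plus time spent per
# resource agent, combined into a rough throughput figure.
from collections import defaultdict

class VirtualStatistics:
    def __init__(self):
        self.op_counts = defaultdict(int)        # e.g. INSERT, DELETE, SELECT
        self.resource_time = defaultdict(float)  # seconds spent per resource agent

    def record(self, operation, resource_times):
        self.op_counts[operation] += 1
        for agent, seconds in resource_times.items():
            self.resource_time[agent] += seconds

    def throughput(self, agent):
        # operations completed per second of time spent in this resource agent
        total_ops = sum(self.op_counts.values())
        spent = self.resource_time[agent]
        return total_ops / spent if spent else 0.0

stats = VirtualStatistics()
stats.record("INSERT", {"disk": 0.004, "network": 0.001})
stats.record("SELECT", {"disk": 0.002, "network": 0.003})
print(stats.op_counts["SELECT"], round(stats.throughput("disk"), 1))  # 1 333.3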
The Error Handling & Notification block 132 looks after the error handling, reporting or notification.
The Display Log and Statistical Log block 134 handles how the display, for example an HTML (Hyper Text Markup Language) file in the case of a browser, is handled, and also maintains the Statistical Log. Unlike normal ODBC clients, the result of any query from a web client (HTTP / FTP) is generally returned in HTML syntax. The HTTP agent 136 does this translation of query data into HTML-tagged syntax. Also, whenever any web scripts are to be executed by the Scheduler Agent 102, the Script agent, in conjunction with the HTTP agent, manages the parsing, validation and execution of web scripts like ASP/JSP etc.
The Server Agent 142 is mainly involved in DDL / DCL operations. The server agent manages any creation, alteration or deletion of objects. It also executes any DCL commands for statistical history or for either query or object analysis.
As the name specifies, the DML Agent 144 executes only DML-specific queries. As soon as a syntactically valid client request reaches the agent, it validates the objects needed by the query and returns an error in case an object is invalid. It then analyses the requirements as per the expected response buffer and allocates the resources required for execution. The cursor type specified by the client request and the size of the anticipated result after or during query execution typically dictate this. If the query requires some expression to be evaluated or special functions to be executed on operands (i.e. mathematical / logical), the DML Agent organizes the resources and executes them.
The flow diagram of Fig. 2 illustrates the preferred embodiment of the present invention, explaining the process by which the system schedules and integrates the various server functionalities and extends these functionalities for seamless data and functionality interchange. The process begins with the Scheduler Agent 102 receiving a client request 200, such as 'CONNECT'. The Scheduler Agent 102 then proceeds to isolate the source and nature of the request 202. On isolating the source and nature of the request 202, the queries received are dynamically ordered in a Priority Queue based on predetermined conditions; as and when new queries are received, the Priority Queue evaluates their nature and source and rearranges the ordering of the queries in the Priority Queue. After completing the isolation of the request, the Scheduler Agent 102 checks whether the request is a Scheduler command 204. In the event that the request is not a Scheduler command, the Scheduler generates an error code 206. In case the request is a Scheduler command, it needs to be executed beyond the scope of the scheduler functionality, and the Scheduler Agent 102 notifies the various agents as per the request scope. Hence a web request, ODBC request or mail client request is directed to the respective agent, such as the database server, a web server or a mail server.
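The routing described in this and the following paragraphs can be pictured with the short sketch below. It is only a paraphrase of the flow diagram under assumed interfaces (the `agents` mapping and the `handle` method are invented for illustration); agent names follow the figure.

```python
class UnknownClientRequest(Exception):
    pass

def dispatch(request: dict, agents: dict):
    """Route a client request to the agent matching its source and nature."""
    source = request["protocol"]          # e.g. ODBC, HTTP, FTP, SMTP, POP
    if source == "ODBC":                  # database server request
        target = "dml" if request["is_dml"] else "server"   # DML vs DDL/DCL
    elif source in ("HTTP", "FTP"):       # web server request
        target = "http"
    elif source in ("SMTP", "POP"):       # mail server request
        target = "mail"
    else:
        raise UnknownClientRequest(source)
    return agents[target].handle(request)
```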
In the event that the request is meant for a Database Server 208, the Scheduler Agent 102 proceeds to analyze whether it is a DML command 210. In the event that it is a DML command, it is given 212 to the DML Agent 144. In the event it is a non-DML command, then as per the request parameters and options the request is given 214 to the Server Agent 142 for DDL or DCL requests. On giving the request to the Server Agent, the request is then checked for any sub DML queries 216. In the event that the request does not have any sub DML queries, it is passed 212 to the DML Agent 144. However, if the request has sub DML queries, the Scheduler Agent 102 forks a new thread for the sub queries 218 and each sub query is passed 212 to the DML Agent 144. The DML Agent 144 then proceeds to send the request 220 to the Resource Analyzer 124, which allocates resources such as RAM etc. The Resource Analyzer 124 then checks whether RAM is required 222. In the event RAM is required, the request is passed 224 to the Cache Manager 106 for processing. In the event no RAM is required, the Disk Agent 140 directly processes the request 226. After the processing done by the Cache Manager 106 or the Disk Agent 140, the request is executed 228; that is, after successful resource allocation the query is executed and the resultant buffer is prepared. The request is then checked for Response to Transmit 230. In the event of Response to Transmit, the request is passed 232 to the Network Agent 100, which transmits the result. The result is then analyzed to ascertain whether it is a successful command 234. In the event the command is not successful, an error handler is generated 236. In the event that the command is successful, a response is sent 238.
In the event of no Response to Transmit, the request is checked for Response to Persist 240. In the event of Response to Persist, the request is sent 242 to the Disk Agent 140. However, if there is no Response to Persist, the request is sent 232 to the Network Agent 100. The result is then analyzed to ascertain whether it is a successful command 234. In the event the command is not successful, an error handler is generated 236. In the event that the command is successful, a response is sent 238. The notification message data varies as per the nature of the source request, i.e. web clients get an HTML error page whereas an ODBC client gets an error code as per the ODBC standard.
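Stringing steps 220 through 242 together, the execution path just described can be summarized in the following sketch. All method and attribute names here are assumptions made for readability; it is a paraphrase of the flow, not the actual implementation.

```python
def execute_dml(request, resource_analyzer, cache_manager, disk_agent,
                network_agent, error_handler):
    """Paraphrase of steps 220-242: allocate resources, execute, then transmit or persist."""
    plan = resource_analyzer.allocate(request)            # step 220/222
    if plan.needs_ram:
        result = cache_manager.process(request, plan)     # step 224
    else:
        result = disk_agent.process(request, plan)        # step 226

    if result.to_transmit:                                # step 230
        ok = network_agent.transmit(result)               # step 232
    elif result.to_persist:                               # step 240
        ok = disk_agent.persist(result)                   # step 242
    else:
        ok = network_agent.transmit(result)

    return result if ok else error_handler(request)       # steps 234-238
```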
In the event that the request is not for a Database Server, the Scheduler Agent 102 checks whether it is a web-based request 246 meant for a Web Server. If the request is for a Web Server, the Scheduler checks whether it is an HTTP request 248. In case the request is an HTTP request, it is given 250 to the HTTP Agent 136. Next, the HTTP Agent 136 passes on this request for translation of the OQL to DML or DDL 252. In the event of a non-HTTP request, the Scheduler Agent 102 classifies it as an FTP request 254. In the event it is not an FTP request, it is classified as an unknown client request 258. However, if it is an FTP request, the FTP request is then given for translation from OQL to DML or DDL 252. Owing to this approach of OQL to SQL translation, every FTP command translates and maps to the database-maintained virtual file system. After this translation to DDL or DML respectively, the process to check for a DML command and onwards is carried out.
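For example, under such an OQL-to-SQL translation a few FTP verbs could map onto DML over the virtual file system roughly as follows. The table and column names (`vfs_files`, `path`, `content`) are invented purely for illustration; the description does not disclose the actual schema.

```python
def ftp_to_sql(command: str, path: str, payload: bytes = None):
    """Map a few FTP verbs onto DML over a hypothetical virtual file system table."""
    if command == "RETR":      # download a file
        return "SELECT content FROM vfs_files WHERE path = ?", (path,)
    if command == "STOR":      # upload a file, stored as BLOB data
        return "INSERT INTO vfs_files (path, content) VALUES (?, ?)", (path, payload)
    if command == "DELE":      # delete a file
        return "DELETE FROM vfs_files WHERE path = ?", (path,)
    if command == "LIST":      # directory listing
        return "SELECT path FROM vfs_files WHERE path LIKE ?", (path + "/%",)
    raise ValueError(f"unsupported FTP command: {command}")
```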
On checking whether the request is for a Web Server 246, in the event that it is not, it is checked to ascertain whether it is a request for the Mail Server 256. If the request is not for a Mail Server, it is classified as an unknown client request 258. Otherwise the request is classified as an SMTP / POP request 260 and is then given for translation from OQL to DML or DDL 252.
As the object of the invention is to seamlessly support data migration between server objects irrespective of functionality, an extra step of translating HTTP / FTP / SMTP / POP requests into DML SQL queries (OQL to SQL translation) is performed internally, after which the request is checked for a DML command and handed to the DML Agent to execute. For example, any web site data, when created, is archived and retrieved as BLOB data, and the entire directory or file structure is simulated and managed through a virtual file system handler. FIG. 3 illustrates a screenshot of an embodiment of the present invention. The Scheduler Agent 305 is depicted with the functional servers and resources listed in the column on the left. The functional entities such as the database server, web server etc., apart from their standard network or disk usage patterns, also map CPU usage patterns in the function-specific agents such as the DML agent, HTTP agent etc., using only the CPU as a resource to deliver the server-specific desired functionalities. This can be perceived in the GUI representation shown in Fig. 3. The figure depicts the multifunctional server as a composition of a set of resource agents, with the CPU decomposed into functional servers such as Database, Web, Mail etc. These derived servers are in turn sets of agents required to deliver server-specific functionality. Also shown in the figure are common agents such as the Dispatcher and the Scheduler Agent 305. Hence the GUI representation shows the functional derivation of an agent as a set of modules executed for a given event in a given state.

Claims

What is claimed is:
1. A computing system for scheduling instructions based on availability and utilization of resources across disparate functional servers, comprising:
a command analyzer to receive said instructions and determine the optimum sequence of execution of said instructions received;
a resource analyzer to allocate said resources optimally for execution; and
a memory store to maintain historical and progressive data of said resource utilization to predict said allocation of resources,
whereby said computing system optimizes scheduling of instructions based on results received from said command analyzer and said resource analyzer in combination with said memory store.
2. The computing system as recited in claim 1 utilizes a means for prioritizing said instructions received in a queue, based on source and nature of said instructions received and availability of said resources.
3. The computing system as recited in claim 2 wherein said queue is capable of ordering said instructions using a yield based kernel.
4. The computing system as recited in claim 2 decomposes said instruction into said instruction functionality and objects to predict resources required for performing said functionality based on said objects.
5. The computing system as recited in claim 1 is implemented using a finite state machine model wherein the application is capable of deciding its own functional flow based on the instructions received rather than the user or the programmer by executing relevant functions dynamically as dictated by said kernel without having to traverse through unnecessary validation code.
6. The computing system as recited in claim 1 wherein each instruction is decomposed into a single thread and said single thread is decomposed further into micro-threads for executing said instructions.
7. The computing system as recited in claim 6 wherein said micro-threads are utilized for determining the scheduling of said resources by a yield based kernel.
8. The computing system as recited in claim 1 further comprises of:
a network means to create threads as per the nature of the said instructions received and protocol in which said instructions are encapsulated;
a timer means to synchronize and monitor said resource usage of current and other processes running on said server; and
a dispatcher means to optimize memory and RAM usage across repetitive client requests by synchronizing cached objects and states of said data frequently used.
9. The computing system as recited in claim 8 wherein said timer means is configured to execute triggers to perform periodic checks on the number of unfinished committed transactions and size of said uncommitted data.
10. A method for scheduling instructions in a computing system based on availability and utilization of resources across disparate functional servers, comprising:
analyzing said instructions received and determining the optimum sequence of execution of said instructions received;
analyzing availability of said resources for allocating said resources optimally for execution of said instructions; and
maintaining historical and progressive data of said resource utilization in a memory store to predict said allocation of resources,
whereby said computing system optimizes scheduling of instructions based on the results of said analyzing steps in combination with said memory store.
11. The method as recited in claim 10 wherein a means is utilized for prioritizing said instructions received in a queue, based on source and nature of said instructions received and availability of said resources.
12. The method as recited in claim 11 wherein said queue is capable of ordering said instructions using a yield based kernel.
13. The method as recited in claim 11 wherein said instructions are decomposed into said instruction functionality and objects to predict resources required for performing said functionality based on said objects.
14. The method as recited in claim 10 wherein said computing system is implemented using a finite state machine model wherein the application is capable of deciding its own functional flow based on the instructions received rather than the user or the programmer by executing relevant functions dynamically as dictated by said kernel without having to traverse through unnecessary validation code.
15. The method as recited in claim 10 wherein each instruction is decomposed into a single thread and said single thread is decomposed further into micro-threads for executing said instructions.
16. The method as recited in claim 15 wherein said micro-threads are utilized for determining the scheduling of said resources by a yield based kernel.
17. The method as recited in claim 10 further comprising the steps of:
creating threads as per the nature of the said instructions received and protocol in which said instructions are encapsulated;
synchronizing and monitoring said resource usage of current and other processes running on said server; and
optimizing memory and RAM usage across repetitive client requests by synchronizing cached objects and states of said data frequently used.
18. The method as recited in claim 17 wherein periodic triggers are configured to check the number of unfinished committed transactions and size of said uncommitted data.
PCT/IN2004/000025 2003-01-30 2004-01-29 System and method for scheduling server functions irrespective of server functionality WO2004077214A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN120MU2003 2003-01-30
IN120/MUM/2003 2003-01-30

Publications (2)

Publication Number Publication Date
WO2004077214A2 true WO2004077214A2 (en) 2004-09-10
WO2004077214A3 WO2004077214A3 (en) 2005-05-26

Family

ID=32922934

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2004/000025 WO2004077214A2 (en) 2003-01-30 2004-01-29 System and method for scheduling server functions irrespective of server functionality

Country Status (1)

Country Link
WO (1) WO2004077214A2 (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6305014B1 (en) * 1998-06-18 2001-10-16 International Business Machines Corporation Lifetime-sensitive instruction scheduling mechanism and method
US6108770A (en) * 1998-06-24 2000-08-22 Digital Equipment Corporation Method and apparatus for predicting memory dependence using store sets

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9213612B2 (en) 2008-09-29 2015-12-15 Cisco Technology, Inc. Method and system for a storage area network
US10243826B2 (en) 2015-01-10 2019-03-26 Cisco Technology, Inc. Diagnosis and throughput measurement of fibre channel ports in a storage area network environment
US10826829B2 (en) 2015-03-26 2020-11-03 Cisco Technology, Inc. Scalable handling of BGP route information in VXLAN with EVPN control plane
US10222986B2 (en) 2015-05-15 2019-03-05 Cisco Technology, Inc. Tenant-level sharding of disks with tenant-specific storage modules to enable policies per tenant in a distributed storage system
US11354039B2 (en) 2015-05-15 2022-06-07 Cisco Technology, Inc. Tenant-level sharding of disks with tenant-specific storage modules to enable policies per tenant in a distributed storage system
US10671289B2 (en) 2015-05-15 2020-06-02 Cisco Technology, Inc. Tenant-level sharding of disks with tenant-specific storage modules to enable policies per tenant in a distributed storage system
US11588783B2 (en) 2015-06-10 2023-02-21 Cisco Technology, Inc. Techniques for implementing IPV6-based distributed storage space
US10778765B2 (en) 2015-07-15 2020-09-15 Cisco Technology, Inc. Bid/ask protocol in scale-out NVMe storage
US10949370B2 (en) 2015-12-10 2021-03-16 Cisco Technology, Inc. Policy-driven storage in a microserver computing environment
US10585830B2 (en) 2015-12-10 2020-03-10 Cisco Technology, Inc. Policy-driven storage in a microserver computing environment
US10140172B2 (en) 2016-05-18 2018-11-27 Cisco Technology, Inc. Network-aware storage repairs
US10872056B2 (en) 2016-06-06 2020-12-22 Cisco Technology, Inc. Remote memory access using memory mapped addressing among multiple compute nodes
US10664169B2 (en) 2016-06-24 2020-05-26 Cisco Technology, Inc. Performance of object storage system by reconfiguring storage devices based on latency that includes identifying a number of fragments that has a particular storage device as its primary storage device and another number of fragments that has said particular storage device as its replica storage device
US11563695B2 (en) 2016-08-29 2023-01-24 Cisco Technology, Inc. Queue protection using a shared global memory reserve
US10545914B2 (en) 2017-01-17 2020-01-28 Cisco Technology, Inc. Distributed object storage
US11252067B2 (en) 2017-02-24 2022-02-15 Cisco Technology, Inc. Techniques for using frame deep loopback capabilities for extended link diagnostics in fibre channel storage area networks
US10243823B1 (en) 2017-02-24 2019-03-26 Cisco Technology, Inc. Techniques for using frame deep loopback capabilities for extended link diagnostics in fibre channel storage area networks
US10254991B2 (en) 2017-03-06 2019-04-09 Cisco Technology, Inc. Storage area network based extended I/O metrics computation for deep insight into application performance
US10303534B2 (en) 2017-07-20 2019-05-28 Cisco Technology, Inc. System and method for self-healing of application centric infrastructure fabric memory
US11055159B2 (en) 2017-07-20 2021-07-06 Cisco Technology, Inc. System and method for self-healing of application centric infrastructure fabric memory
US10999199B2 (en) 2017-10-03 2021-05-04 Cisco Technology, Inc. Dynamic route profile storage in a hardware trie routing table
US11570105B2 (en) 2017-10-03 2023-01-31 Cisco Technology, Inc. Dynamic route profile storage in a hardware trie routing table
US10404596B2 (en) 2017-10-03 2019-09-03 Cisco Technology, Inc. Dynamic route profile storage in a hardware trie routing table
US10942666B2 (en) 2017-10-13 2021-03-09 Cisco Technology, Inc. Using network device replication in distributed storage clusters

Also Published As

Publication number Publication date
WO2004077214A3 (en) 2005-05-26

Similar Documents

Publication Publication Date Title
US7421440B2 (en) Method and system for importing data
Buchmann et al. Time-critical database scheduling: A framework for integrating real-time scheduling and concurrency control
US11630832B2 (en) Dynamic admission control for database requests
US6741982B2 (en) System and method for retrieving data from a database system
WO2004077214A2 (en) System and method for scheduling server functions irrespective of server functionality
US9378337B2 (en) Data item deletion in a database system
US7370326B2 (en) Prerequisite-based scheduler
EP1788486B1 (en) Cooperative scheduling using coroutines and threads
WO2018052907A1 (en) Data serialization in a distributed event processing system
US7747585B2 (en) Parallel uncompression of a partially compressed database table determines a count of uncompression tasks that satisfies the query
CN109840144B (en) Information service scheduling method and system for cross-mechanism batch service request
US20170139745A1 (en) Scaling priority queue for task scheduling
US11620310B1 (en) Cross-organization and cross-cloud automated data pipelines
Saxena et al. Auto-WLM: Machine learning enhanced workload management in Amazon Redshift
US20140040191A1 (en) Inventorying and copying file system folders and files
Campbell Service oriented database architecture: App server-lite?
Sung et al. A component-based product data management system
CN111193774A (en) Method and system for improving throughput of server system and server system
Sutherland et al. Cooperative Concurrency Control for Write-Intensive Key-Value Workloads
US7721287B2 (en) Organizing transmission of repository data
US12079103B2 (en) Performance test environment for APIs
Canon et al. Hector: A Framework to Design and Evaluate Scheduling Strategies in Persistent Key-Value Stores
JP2005107824A (en) Eai server, and program for eai server
Quintero et al. IBM Technical Computing Clouds
Froidevaux et al. The mainframe as a high-available, highly scalable CORBA platform

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase