WO2004077217A2 - System and method of object query analysis, optimization and execution irrespective of server functionality - Google Patents
- Publication number
- WO2004077217A2 (PCT/IN2004/000028)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- query
- execution
- objects
- plan
- optimization
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/289—Object oriented databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2453—Query optimisation
- G06F16/24534—Query rewriting; Transformation
- G06F16/24542—Plan optimisation
Definitions
- the present invention provides a software-implemented process, system, and method for use in a computing environment.
- the present invention deploys multithreaded agent technology, which executes DML queries across database objects and vendor independent functionalities.
- the present invention assumes that a Parser parses SQL queries independent of vendor syntaxes and creates an object interface specific to the objects, options and parameters specified in the query.
- the Parser uses the Dynamic Dictionary for parsing such queries.
- the Object Interface consists of the Interfaces available to use such objects.
- the syntactically correct parsed query may have many unresolved strings (identifiers), which are part of the DML.
- the post processing on such unresolved strings or identifiers resolves them into metadata objects, sub objects, expressions, SQL functions etc.
- intelligent caching of identifiers related to metadata objects is updated in the Parser Data Dynamic Dictionary so that any further valid metadata identifier is resolved as an object associated with a valid schema.
- a logical plan is prepared before the execution of the query. Certain optimizations are possible based on this analysis. For example, some other concurrent clients may just have executed the query currently waiting to be executed and the state of the object may not have changed. Hence the cached resultant data buffer may be routed to the requesting client rather than executing the query again.
- the Query Planner generates the plan in two forms: the Logical Query Plan and the Physical Query Plan.
- the Query Optimization is the process of generating Plans for the query execution and choosing the best plan among them.
- the optimization phase firstly includes an estimation of the query execution cost with respect to the resource utilization.
- the query execution cost is estimated by choosing the factors such as, data block read/write, memory requirement, etc.
- a query plan Left Deep Tree is prepared based on the relation cost. This Left Deep Tree plan is used in the execution of the query.
- Queries that have sub queries are merged into the parent query. This is excluded in the case of correlated queries.
- the execution part involves processing the query plan tree generated in the optimization stage. The query is given to the execution phase and the following steps are done on this query during execution time
- a valid query undergoes analysis for execution plan preparation, which may be optimized if required. After execution, based on the cursor parameters specified by the client, the dispatching of the result is staggered to optimize resources and promote concurrency.
- the entire design is based on state machines and modules communicating various events via messages; that is, it is event driven using the Finite State Machine (FSM) concept, and the functionality is broken down into a series of events scheduled by the kernel.
- FSM Finite State Machine
- Fig. 1 is a block diagram illustrating the manner in which the preferred embodiment of the present invention carries out query analysis and optimization of resources
- Fig. 2 is a flow diagram depicting the stages of the query Execution of the preferred embodiment of the present invention.
- Fig. 3 is a logical flow diagram depicting the various stages of query planning.
- Fig. 4 is a screenshot depicting the query execution in the post-parsing scenario where the identifiers get converted to objects and sub-objects.
- Fig. 5 is a screenshot depicting the state machine statistics for the various agents post execution and their associated timings of resource utilization according to the different modules.
- Fig. 6 is a screenshot depicting the state machine statistics for the various agents post execution and their associated timings of resource utilization graphically for the various modules.
- FIG. 1 there is shown a block diagram illustrating the manner in which the preferred embodiment of the present invention carries out query analysis and optimization of resources.
- the client queries could be from either an ODBC / OLEDB client 100, a Web Client 102 such as MS
- the queries can be of any type, such as an OQL query 106.
- the Parser 108 comprises an Object Parser 110, an Object Interface 112 and a Dynamic Dictionary 114.
- the Parser 108 breaks each incoming query into 'Object(s)', 'Operation(s)', 'Option(s)' and 'Value(s)'.
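The four-way decomposition described above can be sketched as follows. This is an illustrative toy, not the patent's Parser: the function name, the regular-expression grammar and the returned element names are assumptions, and only a simple SELECT form is covered.

```python
import re

def decompose(query):
    """Split a simple DML query into the four element classes the Parser
    is described as producing: operation, objects, options and values.
    Toy grammar: SELECT <cols> FROM <table> [WHERE <col> = <value>]."""
    m = re.match(
        r"SELECT\s+(?P<cols>.+?)\s+FROM\s+(?P<table>\w+)"
        r"(?:\s+WHERE\s+(?P<col>\w+)\s*=\s*(?P<val>\S+))?\s*$",
        query, re.IGNORECASE)
    if not m:
        raise ValueError("unsupported query")
    return {
        "operation": "SELECT",                    # the operation performed
        "objects": [m.group("table")],            # tables, views etc.
        "options": [c.strip() for c in m.group("cols").split(",")],
        "values": [m.group("val")] if m.group("val") else [],
    }

print(decompose("SELECT name, age FROM emp WHERE dept = 10"))
```

A real parser would of course handle the full DML grammar; the point here is only the separation of a query into the four element classes.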
- the Object Parser 110 uses the Dynamic Dictionary 114 for parsing such queries.
- the Dictionary 114 contains a collection of the most frequently used objects, operations, options and values.
- the Object Interface 112 consists of the Interface available to use such objects.
- the Command Analyzer 116 checks the parsed object interface to relate the operation(s) specified in the query and the various identifiers, which are yet to be validated as objects or sub objects.
- the validation of these identifiers in the object interface and with the database metadata using the requisite schema parameters transforms and asserts these identifiers as objects or sub objects.
- All information related to object or sub-object type is updated in the object interface tree. This information isolates the identifiers into objects and values (user options) from the user query. Also all dependencies and relationships related to these objects are updated so that the constraints, rules, defaults or triggers can be executed if defined for the objects. Many SQL statements may have more than one operation to be performed.
- the sequence of execution, inter-dependencies of execution parameters (co-related) is analyzed by the Command Analyzer 116 and an execution priority plan is prepared for each separate query / sub-query to be executed.
- the command analyzer 116 uses the Global Cache 142 for the temporary storage. For each independent query a separate execution parent or child thread is forked and the Command Analyzer 116 coordinates synchronization of the final output data between these threads.
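The forking of a separate execution thread per independent query and the synchronization of the final output, described above, can be sketched with standard threading. The names below are illustrative stand-ins, not the patent's modules.

```python
import queue
import threading

def run_one(qid, sql, out):
    # stand-in for real query execution by a forked child thread
    out.put((qid, f"result of {sql}"))

def coordinate(queries):
    """Fork one execution thread per independent query and synchronize
    the final output buffers back into the original query order."""
    out = queue.Queue()
    threads = [threading.Thread(target=run_one, args=(i, q, out))
               for i, q in enumerate(queries)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                      # synchronization point
    results = dict(out.get() for _ in queries)
    return [results[i] for i in range(len(queries))]

print(coordinate(["SELECT 1", "SELECT 2"]))
```

The queue plays the role of the shared temporary storage (the Global Cache in the text) through which the coordinator collects each thread's buffer.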
- the Interface / Identifier Analyzer 118 validates operations to be performed on the objects with the interface options and values.
- the interface / identifier analyzer 118 uses the Global Cache 142 for the temporary storage. All post parsing unresolved identifiers, which have to be validated as 'object(s), 'sub object(s)', reserved characters, expressions etc is carried out by the Interface / Identifier Analyzer 118. Any unresolved identifier, which fails to be classified as valid literal, object, expression etc, triggers an invalid operation, operand or value and further the query processing is aborted.
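A minimal sketch of that classification step follows; the classification rules and error message are assumptions chosen for brevity, not the patent's actual checks.

```python
def resolve_identifiers(tokens, metadata, reserved):
    """Classify each post-parsing token as a metadata object, a reserved
    word, or a literal; any token that cannot be classified triggers an
    invalid-identifier error and query processing is aborted."""
    resolved = []
    for t in tokens:
        if t in metadata:
            resolved.append((t, "object"))
        elif t.upper() in reserved:
            resolved.append((t, "reserved"))
        elif t.replace(".", "", 1).isdigit() or t.startswith("'"):
            resolved.append((t, "literal"))
        else:
            raise ValueError(f"invalid identifier: {t}")  # abort the query
    return resolved

print(resolve_identifiers(["emp", "where", "42"], {"emp"}, {"WHERE"}))
```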
- the vendor independent object analyzer 120 checks for any specific functionality expected on the object from the object interface 112, which may be syntactically the same but functionally dissimilar across vendor implementations, and accordingly changes and links the associated modules (in the Object Interface tree) to deliver the query specific functionality.
- the Vendor Independent Object Analyzer 120 has a Repository, which maps functionality and features of the object with respect to their addressing notations.
- the Object Interface 112 created by the Parser 108 is imparted functionality as per this interface implementation.
- the Parser 108 resolves the addressing notation issues by mapping these dissimilar notations into similar functionalities, for which the implementation execution is switched or dictated by the vendor independent object analyzer.
- the vendor independent object analyzer 120 tags the objects and sub objects in the interface translator with attribute elements so that the callback function modules change their behavior and vendor independence is achieved. These are mainly related to data type transforms or presentation masks, upper or lower limits of scales and precision, regional settings and other Unicode dependent data requirements. Often a feature or functionality supported for an object by one vendor is missing in another, and in many cases conflicting functionalities with respect to the same addressing convention may exist. Such issues are resolved as per the expected client vendor compatibility resolution; that is, the client ODBC connectivity has options which can dictate vendor standard adherence, and the response is formed accordingly.
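The idea of a callback whose behavior changes according to per-vendor attribute elements can be sketched as below. The repository contents (the dialect names, date masks and scale limits) are invented for illustration; only the mechanism, one code path consulting vendor-tagged attributes, reflects the text.

```python
from datetime import date

# Hypothetical repository of per-vendor attribute elements; the callback
# consults it so one code path serves dissimilar vendor conventions.
REPOSITORY = {
    "vendor_a": {"decimal_scale": 38, "date_mask": "%Y-%m-%d"},
    "vendor_b": {"decimal_scale": 31, "date_mask": "%d-%b-%Y"},
}

def render_value(value, dialect):
    """Callback whose behavior is dictated by the attribute elements
    tagged for the target dialect (presentation mask, scale limit)."""
    attrs = REPOSITORY[dialect]
    if isinstance(value, date):
        return value.strftime(attrs["date_mask"])   # presentation mask
    if isinstance(value, float):
        return round(value, attrs["decimal_scale"]) # scale limit
    return value

print(render_value(date(2004, 2, 25), "vendor_a"))
print(render_value(date(2004, 2, 25), "vendor_b"))
```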
- the Query Planner 128 generates the plan in two forms: the Logical Query Plan 130 and the Physical Query Plan 132. Choosing the best execution path among the different paths and applying the physical factors generates the Physical Query Plan.
- In the Logical Plan, the query is rewritten and different paths are generated based on factors such as filters, joins, indexes etc.
- the logical planning may require optimization or rewriting a query to minimize resources or time of execution. Often based on the state of the object during the execution of a concurrent transaction on the same object involved in the current query the physical plan may sequence or override the flow of the query planner during execution.
- the logical or arithmetic optimizations, which decide the execution flow, are dictated by a set of rules embedded in the Repository.
- the query planner 128 analyzes the sequence of arranged functions and deduces any logical or physical alternative methods of execution state based on set of rules or current state of objects involved in the operation.
- the query planner uses the semantic validator 122, expression validator 124, query optimizer 134 and the query transformer 136.
- the object interface of a successfully syntactically parsed query will have unidentified words which are identifiers or values which have to be validated against user metadata.
- these identifiers have to be clearly isolated into user definable objects or vendor related reserved words and associated with objects or functions, and their functionalities, arguments and data types have to be judged against these identifiers.
- Also, many functional entities have return values that differ across vendors, which actually dictates the speed and convention by which these calculations are managed.
- the semantic validator 122 serves as a primary post-parsing hub where, beyond syntactical compliance, semantic compatibility with respect to the objects in the database is checked and verified across functional servers. For example, the size of an object cannot be negative, or certain values may not accept floating points, etc.
- the Semantic Validator 122 thus resolves this functionality as per object entity independent of features supported by vendors.
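Semantic rules of the kind the text mentions (a size cannot be negative, some options reject floating points) can be sketched as a rule table. The rule names and predicates are illustrative assumptions, not the patent's rule set.

```python
# Illustrative semantic rules: each option name maps to a predicate that
# its value must satisfy, independent of any vendor's feature set.
SEMANTIC_RULES = {
    "size": lambda v: isinstance(v, int) and v >= 0,       # no negative size
    "precision": lambda v: isinstance(v, int) and v > 0,   # no floating point
}

def validate_options(options):
    """Return the names of options that fail their semantic rule;
    an empty list means the query options are semantically valid."""
    return [name for name, value in options.items()
            if name in SEMANTIC_RULES and not SEMANTIC_RULES[name](value)]

print(validate_options({"size": 10, "precision": 2}))
print(validate_options({"size": -1, "precision": 2.5}))
```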
- the unique part of the expression solver 124 is the handling of syntactical and solving issues of expressions across functionally different servers, resolving dependencies and returning data in specific data types or variant arrays. Even most complex procedural language expressions consisting of user- defined functions can be used and resolved in the expression solver 124.
- Optimization by the Query Optimizer 134 is required only if the query is badly written or the conditions and joins can be rearranged to reduce its cost. Many a time developers skip query optimization due to intermittent changes in application design, or the conditions specified in the query could have been evaluated using another, indexed column of the same object, which could give unique combinations or a smaller intermediate result set.
- the Query Optimization 134 is the process of generating plans for the query execution and choosing the best plan among them.
- the Query Optimizer 134 carries out optimization in two stages; namely, arithmetic or logical optimization (purely based on the conditions in the query) and physical execution optimization as per state of object, concurrent operations and cache.
- the Query Transformer 136 transforms a query, if required, when the expected data and the persisted data are dissimilar and have to undergo change for presentation logic. Since the OQL engine can expect a query from any client (web, mail, database), the response buffer demanded can be created in a format dictated by the client rather than the standard ODBC format. This module tunes the transformation of the query to suit the response demanded.
- the Query Transformer 136 does the transformation of the query as per the optimized query plan taking into consideration both the Logical Plan 130 and Physical Plan 132 as prepared by the Query Planner 128 and optimized by the Query Optimizer 134.
- the Query Execution Engine 138 carries out the query execution according to the optimized plan that is both the Logical Plan 130 and the Physical Plan 132 as prepared by the Query Optimizer 134.
- the Query Execution Engine 138 takes the query from the Query Transformer and executes the Queries using the Index Agent 144.
- the sequence and number of modules executed to derive the result varies from query to query; hence, unlike linear programming (since the current invention is state machine driven), the execution of code for each query varies and therefore delivers better performance, because many unnecessary validations and checks are never executed.
- the Cursor and Cache Analyzer 140 is a sort of resource balancer between the sourcing of resultant buffer by the query execution engine 138 and sinking the buffer via network to the requesting client.
- This Cursor and Cache Analyzer 140 also uses the Global Cache 142 for the data transportation.
- the nature of the request dictates the resource utilization; i.e. typically a forward-only cursor does not demand data scrollability, hence the same buffer can be reused to create newer data buffers without allocating new memory blocks, whereas a dynamic or keyset type of cursor demands many resources and may require swapping to secondary storage.
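The contrast between the two cursor types can be sketched as follows; the class and method names are illustrative, not an ODBC or patent API.

```python
class ForwardOnlyCursor:
    """Reuses a single buffer per fetch: no scrollability, minimal memory."""
    def __init__(self, blocks):
        self._blocks = iter(blocks)
        self._buffer = None                 # one reusable buffer
    def fetch_next(self):
        self._buffer = next(self._blocks, None)
        return self._buffer

class KeysetCursor:
    """Retains every block so the client can scroll back; this costs far
    more memory and, in a real engine, may need swapping to disk."""
    def __init__(self, blocks):
        self._blocks = list(blocks)         # all blocks held in memory
        self._pos = -1
    def fetch_next(self):
        self._pos += 1
        return self._blocks[self._pos] if self._pos < len(self._blocks) else None
    def fetch_prior(self):
        self._pos = max(self._pos - 1, 0)   # scroll back
        return self._blocks[self._pos]

fc = ForwardOnlyCursor([b"block1", b"block2"])
print(fc.fetch_next(), fc.fetch_next(), fc.fetch_next())
```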
- the function of the Index Agent 144 is to help the Query Execution Engine 138 in executing queries whose condition elements are indexed.
- the constraints of the conditions specified in the query and the attributes defined retrieve data fastest irrespective of the volume or type of data.
- the cached resultant data in Output Buffer as per the OQL executed query 146 may be routed to the requesting client rather than executing the query again.
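Routing a cached result buffer instead of re-executing, keyed on both the query and the state of the object, can be sketched as below. This is an assumption-laden toy (the version counter and names are invented); it only illustrates the cache-hit condition the text describes: same query, unchanged object state.

```python
cache = {}
calls = []

def run_query(q):
    calls.append(q)                   # stands in for real execution
    return f"buffer({q})"

def execute(query, object_version, runner=run_query):
    """Serve the cached buffer when the same query was already executed
    and the object state (version) has not changed since."""
    key = (query, object_version)     # object state is part of the key
    if key not in cache:              # real execution only on a miss
        cache[key] = runner(query)
    return cache[key]

execute("SELECT * FROM emp", 7)
execute("SELECT * FROM emp", 7)   # same query, same state: buffer reused
print(len(calls))                  # executed once so far
execute("SELECT * FROM emp", 8)   # object state changed: re-executed
print(len(calls))
```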
- the output may have to be parsed and rendered to generate HTML if demanded with a stylesheet.
- the module uses the parser, which can parse, via SAX or DOM 145, defined stylesheets or DTDs to create an HTML buffer as demanded by the source client from which the OQL request was received.
- This resultant data can be routed either via the Dispatcher Agent 148 which acts like a temporary buffer or the Http Agent 150 in case of an http request.
- Fig 2 is a flow diagram illustrating the stages of the query Execution in the preferred embodiment of the present invention.
- the parser gives the syntactically correct parsed query as an object interface 300 to this invention.
- the proposed invention perceives any query in object oriented paradigm and decomposes a query into logical objects and associates a set of functional modules to deliver these functionalities. This information isolates the identifiers into objects and values (user options) from the user query. Also all dependencies and relationships related to these objects are updated so that the constraints, rules, defaults or triggers can be executed if defined for the objects.
- the parsed user query is an object interface; this object interface is used for the isolation of the identifiers, and the objects, sub-objects and values for operands are derived 302.
- the source of the request for the operation is analyzed 304. Further, the objects from the object interface are analyzed and the operations and objects are validated 306. In the event of an invalid operation an error is notified 308; the validation of these identifiers in the object interface against the database metadata using the requisite schema parameters transforms and asserts these identifiers as objects or sub objects, and all the relevant information related to the object or sub-object type is updated in the object interface tree. In the event the objects are successfully isolated and analyzed 306, the options associated with the operation are isolated, values are linked and interdependencies, if any, are derived 310.
- the vendor independent object analyzer isolates the options/ values and related call back functions or modules change its behavior such that vendor independence is achieved 316. Further the order or sequence of modification/ function execution in the order of least dependencies is identified 318.
- the query planning is done by analyzing this sequence of arranged functions and deducing any logical or physical alternative methods of execution based on a set of rules or the current state of objects involved in the operation 330.
- the semantic validation is carried out to check whether the query has expressions, handling their syntactical and solving issues across functionally different servers, including resolving the dependencies and returning the data in specific data types or variant arrays.
- the expression is checked for co-relation or dependencies 330. If there exist no pending dependencies or co-relations in the expression query, then the expression is resolved and the result is derived 332. In case the expression has pending co-relations or dependencies 330, then the resources are allocated and the object interface stack is updated with the results 334. In the event the query has no expression 322, it is checked whether the query has SQL or datatype specific transformation/translation functions 324.
- If the query has SQL or datatype specific transformation/translation functions 324, it is checked whether the query has co-relations or dependencies 326. If dependencies or co-relations 326 are pending, then the resources are allocated and the object interface stacks are updated with the results 334. If no dependencies or co-relations 326 are pending, then the functions are resolved or executed and the results are derived 328.
- the query transformation is required when the expected data and the persisted data are dissimilar and have to undergo change for presentation logic. After the query resolution, function execution and derivation of the results 328, query transformation such as simplifying the interdependencies or relations is possible through any existing arithmetic and logical rules 336. After successful query execution, the system checks whether a rewrite is necessary 338.
- FIG. 3 is a logical flow diagram depicting the various stages of query planning.
- a syntactically correct parsed query, when it successfully emerges from the semantic validation, is ready for execution 200.
- the resolution of each query has to be semantically validated by verifying each identifier in the query against object or sub-object(s) definition in query and their option values.
- the query is examined for the Validation check 202.
- the query is checked for the semantics 204.
- the Vendor Independent Object Analyzer 120 translates and binds the operation, option and values with the object functionality, interpreting various vendor syntaxes.
- the Query Optimization is planned firstly as logical plan 130 and next as a Physical plan 132.
- the optimization phase firstly includes estimating the query execution cost with respect to the resource utilization 208.
- the query execution cost is estimated by choosing the factors such as, data block read/write, memory requirement, etc.
- a query plan Left Deep Tree is prepared based on the relation cost 210. This Left Deep Tree plan is used in the execution of the query.
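Preparing a left-deep tree from per-relation costs can be sketched as follows. The cost figures and the greedy cheapest-first ordering are assumptions for illustration; the text only states that the tree is prepared based on the relation cost.

```python
def left_deep_plan(relations):
    """Order relations by estimated cost (e.g. block reads) and fold them
    into a left-deep join tree: ((r1 JOIN r2) JOIN r3) JOIN ...
    `relations` maps relation name -> estimated cost."""
    ordered = sorted(relations, key=relations.get)   # cheapest first
    tree = ordered[0]
    for name in ordered[1:]:
        tree = (tree, name)       # left child is always the subtree so far
    return tree

print(left_deep_plan({"emp": 100, "dept": 10, "sal": 500}))
```

The nested-tuple output mirrors the shape of a left-deep tree: every join's left input is the result of all previous joins, which is what makes pipelined execution of the plan straightforward.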
- the query is optimized by pushing down the projection list in the tree 212. Further, after the Push Down Projections, the query is optimized by pushing down the conditions, expressions and SQL functions in the tree 214. Then joins and indexes are identified 216 and the path is changed accordingly. Queries that have sub queries are merged 218 into the parent query, except in the case of correlated queries.
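The push-down steps can be sketched by assigning each projection column and each single-relation condition to the relation that owns the column, so it is evaluated at the leaf of the tree rather than at the root. The data shapes below are assumptions chosen for brevity.

```python
def push_down(projections, conditions, relation_columns):
    """Push projections and single-relation conditions down to the
    relation that owns each column, so they run at the leaf node.
    `conditions` is a list of (column, operator, value) triples."""
    per_relation = {r: {"project": [], "filter": []} for r in relation_columns}
    owner = {c: r for r, cols in relation_columns.items() for c in cols}
    for col in projections:
        per_relation[owner[col]]["project"].append(col)
    for col, op, val in conditions:
        per_relation[owner[col]]["filter"].append((col, op, val))
    return per_relation

plan = push_down(
    projections=["name", "dname"],
    conditions=[("age", ">", 30)],
    relation_columns={"emp": ["name", "age"], "dept": ["dname"]})
print(plan["emp"])
```

Applying the filter and projection at the leaf shrinks every intermediate result the joins above have to process, which is the point of the optimization.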
- the Physical plan 132 involves identifying the optimized plan tree 220. After the Plan tree 220 is identified, the appropriate algorithm is applied 222.
- the algorithms used to process the query are Nested Loops 224, Simple Projection 226, Filter Projection 228, Merge Sort Joins 230, One Pass Sort Based 232 and Multi Pass Sort Based 234 algorithm.
- the execution 238 part involves processing the query plan tree generated in the optimization stage. In this stage the relation's actual blocks are read either from disk or cache and different algorithms are used to process the nodes.
- the algorithms used to process the query are Nested Loops 224, Simple Projection 226, Filter Projection 228, Merge Sort Joins 230, One Pass Sort Based 232 and Multi Pass Sort Based 234.
- a node from the plan tree is obtained and checked for Cartesian or Relation type. According to the plan tree, an appropriate algorithm is selected.
- The inner loop relation resides in memory and the outer loop relation is read block wise. As the relation is processed, intermediate blocks are prepared and committed to the disk. These steps continue until all nodes in the tree are processed.
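The block-wise nested loop described above can be sketched as follows; rows are modeled as dictionaries and the "commit to disk" is a plain list append, both assumptions for the sake of a short example.

```python
def block_nested_loop_join(outer_blocks, inner, key):
    """Nested Loops sketch: the inner relation resides in memory, the
    outer relation is read block by block, and an intermediate block is
    prepared and committed as each outer block finishes."""
    committed = []
    for block in outer_blocks:              # outer relation, block-wise
        intermediate = []
        for row in block:
            for match in inner:             # in-memory inner relation
                if row[key] == match[key]:
                    intermediate.append({**row, **match})
        committed.append(intermediate)      # stands in for a disk commit
    return committed

outer = [[{"dept": 1, "name": "a"}], [{"dept": 2, "name": "b"}]]
inner = [{"dept": 1, "dname": "hr"}]
print(block_nested_loop_join(outer, inner, "dept"))
```

Because only one outer block and the inner relation need to be resident at a time, memory use stays bounded regardless of how large the outer relation is.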
- the result preparation phase involves preparing a meaningful buffer from the blocks generated by the query execution phase. Everything is cleaned up at this phase and resources are relinquished based on the cursor and request type.
- Fig. 4 is a screenshot depicting the query execution in the post-parsing scenario where the identifiers get converted to objects and sub-objects.
- the proposed invention perceives any query in object oriented paradigm and decomposes a query into logical objects and associates a set of functional modules to deliver these functionalities.
- the DML operations have been widely categorized into set of operations, objects, options and values.
- the query execution is carried out on operations such as the database commands namely SELECT 400, INSERT 405, UPDATE 410, DELETE 415 etc.
- the objects can be tables, views etc as mentioned in the queries and options can be conditions, expressions, SQL functions, which work on specific operands such as constants or data type columns. These are arranged and rearranged so as to optimize the sequence of execution of related modules using optimum resources in optimum time. Since the entire implementation of all complex data manipulation functionalities is implemented in state machines with each module functionally independent and strictly adhering to the Atomicity, Consistency, Isolation and Durability (ACID) properties, the sequence of calling convention or module chaining, is decided by the query planner and optimizer. This gives the best and minimum code to be iteratively executed to deliver the expected functionality from the user query.
- ACID Atomicity, Consistency, Isolation and Durability
- Fig. 5 lists the modules of the specific agent, with total agent-wise resource timings displayed.
- Fig. 6 plots the timing of each module graphically and can be used to achieve balance or uniformity in resource usage so as to make the response time reasonably predictable and accurate.
Landscapes
- Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Operations Research (AREA)
- Computational Linguistics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The present invention relates generally to the field of OQL query analysis, optimization and execution independent of features and syntactical variations supported by any Database Management System (DBMS) vendors. More particularly the present invention relates generally to a methodology of syntactically varying vendor specific query execution, balancing resources and heterogeneous clients and permitting functionality interchange across similar database objects but varying implementation methodology without reprogramming effort.
Description
TITLE OF INVENTION
SYSTEM AND METHOD OF OBJECT QUERY ANALYSIS, OPTIMIZATION AND EXECUTION IRRESPECTIVE OF SERVER FUNCTIONALITY
BACKGROUND OF THE INVENTION Most developers of business process automation applications face a Herculean task of maintaining application design across database vendors and versions of databases. Any such software manufacturer who wants applications to be ported across database vendors typically has versioning issues across database vendors and product versions because of problems such as: database availability on various Operating Systems (OS); resource requirements and costing factors across OS; ease of implementation because of development and debugging tools; ease of deployment; maintenance and cost of maintenance with respect to manpower skill set requirements and time; features supported by the vendor which enhance application development, deployment, maintenance and performance across issues like concurrency, database size, and the nature of client request queries; and product support availability and relationship with the product vendor.
The general standard process adopted by these application developers to permit database portability is to adhere to the common features or syntaxes supported by all required database vendors. This is possible for small applications, but in many cases it leads to serious design compromise and performance degradation without the end user's knowledge, and his investment in such automation solutions does not guarantee an effective yield. Many application vendors then push the responsibility of unnecessary maintenance to the Database Administrator (DBA) as part of the automation process. Also, none of the application designers are guaranteed Data Definition Language (DDL) portability, which is always a maintenance issue for developers across database vendors, and intelligent SQL syntax switching as per the database backend is managed as part of the development cycle.
All these temporary solutions still lead to disparate systems, and these database specific automation solutions create islands of non-portable data and incompatible business process automation. Hence additional tools like middleware or Enterprise Application Integration (EAI) solutions are required to bridge this automation disparity. Also, in any process of automation, the business process requires varying degrees of database features at various levels of the implementation cycle as per the volume of data, business logic, resources required, technical skill set available and functional expertise required. Hence any enterprise wide automation can never guarantee optimum utilization of hardware and software resources.
Hence, there exists a need to remove the above-mentioned disparities. There exists a further need for an application which could guarantee a very high degree of optimum database utilization without any application design compromise, bridging database functionalities. Further, the promotion of data interchange between applications written for specific database vendors is needed.
SUMMARY OF THE INVENTION
To meet the foregoing needs, the present invention provides a software-implemented process, system, and method for use in a computing environment. The present invention deploys multithreaded agent technology, which executes DML queries across database objects and vendor independent functionalities. The present invention assumes that a Parser parses SQL queries independent of vendor syntaxes and creates an object interface specific to the objects, options and parameters specified in the query.
The Parser uses the Dynamic Dictionary for parsing such queries. The Object Interface consists of the Interfaces available to use such objects. As soon as the DML Agent receives the Object Interface, that is, a tokenized query tree, the post-parsing process begins. The syntactically correct parsed query may have many unresolved strings (identifiers), which are part of the DML. The post processing on such unresolved strings or identifiers resolves them into metadata objects, sub objects, expressions, SQL functions etc. For enhancing DML performance and reducing validation monotony, intelligent caching of identifiers related to metadata objects is updated in the Parser Data Dynamic Dictionary so that any further valid metadata identifier is resolved as an object associated with a valid schema.
As per the analysis of these prerequisites, a logical plan is prepared before the execution of the query. Certain optimizations are possible based on this analysis. For example, some other concurrent clients may just have executed the query currently waiting to be executed and the state of the object may not have changed. Hence the cached resultant data buffer may be routed to the requesting client rather than executing the query again.
The Query Planner generates the plan in two forms: the Logical Query Plan and the Physical Query Plan. Query Optimization is the process of generating plans for the query execution and choosing the best plan among them.
The optimization phase firstly includes an estimation of the query execution cost with respect to the resource utilization. The query execution cost is estimated by choosing the factors such as, data block read/write, memory requirement, etc. After this process of query cost estimation, a query plan Left Deep Tree is prepared based on the relation cost. This Left Deep Tree plan is used in the execution of the query.
Queries that have sub queries are merged into the parent query; this is excluded in the case of correlated queries. The execution part involves processing the query plan tree generated in the optimization stage. The query is given to the execution phase and the following steps are performed on it during execution time.
A valid query undergoes analysis for execution plan preparation, which may be optimized if required. After execution, based on the cursor parameters specified by the client, the dispatching of the result is staggered to optimize resources and promote concurrency.
The entire design is based on state machines, with modules comprising various events communicating via messages; that is, it is event driven using the Finite State Machine (FSM) concept, and the functionality is broken down into a series of events scheduled by the kernel.
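The event-driven FSM design can be sketched as a tiny kernel that schedules events from a queue, with each module acting as a state handler that chains to the next event. The module names (`parse`, `plan`, `execute`) and the `Kernel` class are illustrative assumptions, not the actual agent.

```python
from collections import deque

class Kernel:
    def __init__(self):
        self.handlers = {}    # event name -> handler callable
        self.queue = deque()  # pending events scheduled by the kernel
        self.trace = []       # order in which events actually ran

    def on(self, event, handler):
        self.handlers[event] = handler

    def post(self, event, payload=None):
        self.queue.append((event, payload))

    def run(self):
        while self.queue:
            event, payload = self.queue.popleft()
            self.trace.append(event)
            next_event = self.handlers[event](payload)
            if next_event:                       # a handler chains modules by
                self.post(next_event, payload)   # returning the next event

kernel = Kernel()
kernel.on("parse", lambda q: "plan")
kernel.on("plan", lambda q: "execute")
kernel.on("execute", lambda q: None)
kernel.post("parse", "SELECT * FROM orders")
kernel.run()
```

Because modules only run when an event for them is posted, the code path varies per query, which is the benefit the description attributes to the state-machine approach over linear code.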
BRIEF DESCRIPTION OF THE DRAWINGS The various objects and advantages of the present invention will become apparent to those of ordinary skill in the relevant art after reviewing the following detailed description and accompanying drawings, wherein:
Fig. 1 is a block diagram illustrating the manner in which the preferred embodiment of the present invention carries out query analysis and optimization of resources.
Fig. 2 is a flow diagram depicting the stages of the query Execution of the preferred embodiment of the present invention.
Fig. 3 is a logical flow diagram depicting the various stages of query planning.
Fig. 4 is a screenshot depicting the query execution in the post-parsing scenario where the identifiers get converted to objects and sub-objects.
Fig. 5 is a screenshot depicting the state machine statistics for the various agents post execution and their associated timings of resource utilization according to the different modules.
Fig. 6 is a screenshot depicting the state machine statistics for the various agents post execution and their associated timings of resource utilization graphically for the various modules.
DETAILED DESCRIPTION OF THE INVENTION
While the present invention is susceptible to embodiment in various forms, there is shown in the drawings and will hereinafter be described a presently preferred embodiment with the understanding that the present disclosure is to be considered an exemplification of the invention and is not intended to limit the invention to the specific embodiment illustrated.
In the present disclosure, the words "a" or "an" are to be taken to include both the singular and the plural. Conversely, any reference to plural items shall, where appropriate, include the singular.
Referring now to the drawings, particularly Fig. 1, there is shown a block diagram illustrating the manner in which the preferred embodiment of the present invention carries out query analysis and optimization of resources.
The client queries could be from either an ODBC / OLEDB client 100, a Web Client 102 such as MS Internet Explorer, or a Mail Client 104 such as Microsoft Exchange. These queries can be any one of the queries such as an OQL query 106.
The Parser 108 comprises an Object Parser 110, an Object Interface 112 and a Dynamic Dictionary 114.
The Parser 108 breaks each incoming query into 'Object(s)', 'Operation(s)', 'Option(s)' and 'Value(s)'.
The Object Parser 110 uses the Dynamic Dictionary 114 for parsing such queries. The Dynamic
Dictionary 114 contains a collection of the most frequently used objects, operations, options and values. The Object Interface 112 consists of the Interface available to use such objects.
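The decomposition performed by the Parser 108 can be illustrated with a toy example. The grammar below (a single SELECT shape) is an assumption for illustration only; the actual parser handles arbitrary vendor-independent SQL.

```python
import re

def decompose(sql):
    """Split a simple query into operation, objects, options and values,
    mirroring the Object/Operation/Option/Value decomposition described
    above. Supports only one illustrative query shape."""
    m = re.match(
        r"SELECT\s+(.+?)\s+FROM\s+(\w+)(?:\s+WHERE\s+(\w+)\s*=\s*(\S+))?",
        sql, re.IGNORECASE)
    if not m:
        raise ValueError("unsupported query shape in this sketch")
    columns, table, opt_col, opt_val = m.groups()
    return {
        "operation": "SELECT",
        "objects": [table],                       # tables, views, ...
        "options": [opt_col] if opt_col else [],  # condition columns
        "values": [opt_val] if opt_val else [],   # constants / operands
    }

parts = decompose("SELECT name FROM customers WHERE region = 'EU'")
```

Each bucket then feeds a different downstream analyzer: objects go to metadata validation, options and values to the semantic and expression checks.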
The Command Analyzer 116 checks the parsed object interface to relate the operation(s) specified in the query and the various identifiers that are yet to be validated as objects or sub-objects. Validating these identifiers in the object interface against the database metadata, using the requisite schema parameters, transforms and asserts them as objects or sub-objects. All information related to the object or sub-object type is updated in the object interface tree. This information isolates the identifiers into objects and values (user options) from the user query. All dependencies and relationships related to these objects are also updated so that constraints, rules, defaults or triggers can be executed if defined for the objects. Many SQL statements may have more than one operation to be performed. The sequence of execution and the inter-dependencies of execution parameters (co-related) are analyzed by the Command Analyzer 116, and an execution priority plan is prepared for each separate query / sub-query to be executed. The Command Analyzer 116 uses the Global Cache 142 for temporary storage. For each independent query a separate execution parent or child thread is forked, and the Command Analyzer 116 coordinates synchronization of the final output data between these threads.
The Interface / Identifier Analyzer 118 validates operations to be performed on the objects with the interface options and values. The Interface / Identifier Analyzer 118 uses the Global Cache 142 for temporary storage. All post-parsing unresolved identifiers, which have to be validated as 'object(s)', 'sub-object(s)', reserved characters, expressions, etc., are handled by the Interface / Identifier Analyzer 118. Any unresolved identifier that fails to be classified as a valid literal, object, expression, etc. triggers an invalid operation, operand or value, and further query processing is aborted.
As often observed, a major source of heterogeneity is vendor implementation of the same object functionality using proprietary jargon. Since the invention is designed for vendor independence across functional servers, the translation of any OQL query into SQL statements, on any server-specific objects and across various vendor implementation features, has to be taken care of. The Vendor Independent Object Analyzer 120 checks for any specific functionality expected on the object from the Object Interface 112 that may be syntactically the same but functionally dissimilar across vendor implementations, and accordingly changes and links the associated modules (in the Object Interface tree) to deliver the query-specific functionality. The Vendor Independent Object Analyzer 120 has a Repository that maps the functionality and features of the object with respect to their addressing notations. The Object Interface 112 created by the Parser 108 is imparted functionality as per this interface implementation. The Parser 108 resolves the notational addressing issues by mapping these dissimilar notations into similar functionalities, for which the implementation execution is switched or dictated by the Vendor Independent Object Analyzer. The Vendor Independent Object Analyzer 120 tags the objects and sub-objects in the interface translator with attribute elements so that the callback function modules change their behavior and vendor independence is achieved. These are mainly related to data type transforms or presentation masks, upper or lower limits of scale and precision, regional settings and other Unicode-dependent data requirements. Often a feature or functionality supported for an object by one vendor is missing in another, and in many cases conflicting functionalities with respect to the same addressing convention may exist. Such issues are resolved as per the client vendor compatibility resolution expected; that is, the client ODBC connectivity has options that can dictate vendor standard adherence, which will accordingly be honored in the response.
The Query Planner 128 does the plan generation in two forms, the Logical Query Plan 130 and the Physical Query Plan 132. Choosing the best execution path among the different paths and applying the physical factors generates the Physical Query Plan. In the Logical Plan, the query is rewritten and different paths are generated based on factors such as filters, joins, indexes, etc. The logical planning may require optimization or rewriting of a query to minimize resources or execution time. Often, based on the state of the object during the execution of a concurrent transaction on the same object involved in the current query, the physical plan may sequence or override the flow of the query planner during execution. The logical or arithmetic optimizations that decide the execution flow are dictated by a set of rules embedded in the Repository. These rules can be updated and modified by an end user who knows all possible functional combinations of the DML agent, that is, this invention. The Query Planner 128 analyzes the sequence of arranged functions and deduces any logical or physical alternative methods of execution based on the set of rules or the current state of the objects involved in the operation. The Query Planner uses the Semantic Validator 122, the Expression Solver 124, the Query Optimizer 134 and the Query Transformer 136.
The object interface of a successfully syntactically parsed query will have unidentified words (identifiers or values) that have to be validated against user metadata. These identifiers have to be clearly separated into user-definable objects or vendor-related reserved words, associated with objects or functions, and their functionalities, arguments and data types have to be judged against these identifiers. Also, many functional entities have return values that differ across vendors, which actually dictate the speed and convention by which these calculations are managed. The Semantic Validator 122 serves as a primary post-parsing hub where, beyond syntactical compliance, semantic compatibility with respect to the objects in the database is checked and verified across functional servers. For example, the size of an object cannot be negative, or certain values may not accept floating points, etc., as all these variables have to be translated, dynamically bound and converted from a single string data type entity (that is, the original query) to sub-entities of various data types as per the object definitions. The Semantic Validator 122 thus resolves this functionality per object entity, independent of the features supported by vendors.
The unique part of the Expression Solver 124 is the handling of syntactical and solving issues of expressions across functionally different servers, resolving dependencies and returning data in specific data types or variant arrays. Even the most complex procedural language expressions, consisting of user-defined functions, can be used and resolved in the Expression Solver 124.
All static expressions, which do not have a resolvable variable as part of the expression or operands, are resolved by the Expression Solver 124. Hence the expressions which the DML agent handles are generally those which depend on record values or on a set of logical / arithmetic conditions derived from column entities; that is, the expression has to be calculated for every iterative logical condition specified in the query, based on the record count of the object.
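The static versus record-dependent distinction drawn above can be sketched as follows. The mini-evaluator (Python's `eval` over a row dictionary) is an illustrative assumption standing in for the actual expression engine: a static expression is folded once, while a column-referencing expression must be evaluated per record.

```python
def is_static(expr, columns):
    """An expression is static if it references no column entity."""
    return not any(c in expr for c in columns)

def solve(expr, rows, columns):
    if is_static(expr, columns):
        # folded once, before execution
        return eval(expr, {"__builtins__": {}})
    # record-dependent: evaluated iteratively, once per row
    return [eval(expr, {"__builtins__": {}}, dict(row)) for row in rows]

rows = [{"price": 10, "qty": 2}, {"price": 5, "qty": 3}]
static_result = solve("2 * 3 + 1", rows, ["price", "qty"])   # folded once
per_row = solve("price * qty", rows, ["price", "qty"])       # per record
```

Folding static sub-expressions up front is what lets the agent avoid recomputing constants inside the per-record loop.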
Any word in the source SQL query string which does not occur in the Parser 108 Dynamic Dictionary 114 is classified as an Identifier. During the course of validating identifiers in a query against the metadata objects and sub-objects, it is observed that the higher the frequency of occurrence of the query in a multi-user distributed network environment, the larger the unnecessary validation effort. Also, any live, time-tested application will have only its data varying during normal usage, while the metadata (DDL of the application) remains fairly static. Hence these sub-object or object names specific to the application DDL, which initially remain as unresolved identifiers (for the first query occurrence) for the Parser 108, can be dynamically cached in conjunction with the Dynamic Cache Interface 126. In a multi-schema operation the same object or sub-object name can have multiple occurrences. This is also resolved as per the current scope of the schema login with respect to user grants, roles and privileges. Also, most of the values that vary for the application object data definition are validated by the front-end application Graphical User Interface (GUI). Hence the chances of semantic failures because of the objects, options or values specified in a query are minimized. Since all object or sub-object information needed to perform operations is cached, the first operation on an object is slightly costly with respect to resource utilization. Subsequently, such queries have almost all information embedded by the Parser 108 in the Object Interface 112. Hence, a lot of metadata validation is pushed to the Parser 108, and the present invention achieves more concurrency because of the Dynamic Cache Interface 126.
Optimization by the Query Optimizer 134 is required only if the query is badly written or if the conditions and joins can be rearranged to reduce its cost. Developers often skip checking query optimization due to intermittent changes in application design, or the conditions specified in the query could have been evaluated using another column of the same object which was indexed and could give unique combinations or a smaller intermediate resultset. Query Optimization is the process of generating plans for the query execution and choosing the best plan among them. The Query Optimizer 134 carries out optimization in two stages: arithmetic or logical optimization (purely based on the conditions in the query), and physical execution optimization as per the state of the object, concurrent operations and cache.
The Query Transformer 136 transforms a query, if required, when the expected data and the persisted data are dissimilar and have to undergo change for presentation logic. Since the OQL engine can expect a query from any client (web, mail, database), the response buffer demanded can be created in a format dictated by the client rather than the standard ODBC format. This module tunes the transformation of the query to suit the response demanded. The Query Transformer 136 transforms the query as per the optimized query plan, taking into consideration both the Logical Plan 130 and the Physical Plan 132 as prepared by the Query Planner 128 and optimized by the Query Optimizer 134.
The Query Execution Engine 138 carries out the query execution according to the optimized plan, that is, both the Logical Plan 130 and the Physical Plan 132 as prepared by the Query Optimizer 134. The Query Execution Engine 138 also takes the query from the Query Transformer and executes the queries using the Index Agent 144. The sequence and number of modules executed to derive the result varies from query to query; hence, unlike linear programming (since the current invention is state machine driven), the execution of code for each query varies, delivering better performance because many unnecessary validations and checks are never executed.
The Cursor and Cache Analyzer 140 is a sort of resource balancer between the sourcing of the resultant buffer by the Query Execution Engine 138 and the sinking of the buffer via the network to the requesting client. The Cursor and Cache Analyzer 140 also uses the Global Cache 142 for data transportation. The nature of the request dictates the resource utilization; typically, a forward-only cursor does not demand data scrollability, hence the same buffer can be reused to create newer data buffers without allocating new memory blocks, whereas a dynamic or keyset type of cursor demands a lot of resources and may require swapping to secondary storage.
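The buffer-reuse trade-off for a forward-only cursor can be sketched as a generator that fills and drains a single reusable buffer per fetch. Batch size and data are illustrative assumptions; a scrollable (dynamic or keyset) cursor would instead have to retain every fetched block.

```python
def forward_cursor(rows, batch):
    """Yield result rows batch-wise, reusing one buffer, as a
    forward-only cursor can since clients never scroll back."""
    buf = []                     # single reusable buffer
    for row in rows:
        buf.append(row)
        if len(buf) == batch:
            yield list(buf)      # hand a snapshot to the network layer
            buf.clear()          # reuse the same buffer for the next block
    if buf:
        yield list(buf)          # flush the final partial block

batches = list(forward_cursor(range(5), 2))
```

Because the buffer is cleared and refilled rather than reallocated, memory use stays bounded by the batch size regardless of result-set size, which is the resource saving the passage attributes to forward-only cursors.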
The function of the Index Agent 144 is to help the Query Execution Engine 138 in the execution of queries whose condition elements are indexed. The constraints of the conditions specified in the query and the attributes defined retrieve data fastest, irrespective of the volume or type of data.
The cached resultant data in the Output Buffer for the executed OQL query 146 may be routed to the requesting client rather than executing the query again. For many queries the output may have to be parsed and rendered to generate HTML, if demanded, with a stylesheet. The module uses the parser, which can parse SAX- or DOM-defined 145 stylesheets or DTDs, to create an HTML buffer as demanded by the source client from which the OQL request was received. This resultant data can be routed either via the Dispatcher Agent 148, which acts like a temporary buffer, or via the Http Agent 150 in the case of an http request.
Fig. 2 is a flow diagram illustrating the stages of the query execution in the preferred embodiment of the present invention.
The parser gives the syntactically correct parsed query as an object interface 300 to this invention. The proposed invention perceives any query in an object-oriented paradigm and decomposes a query into logical objects, associating a set of functional modules to deliver these functionalities. This information isolates the identifiers into objects and values (user options) from the user query. All dependencies and relationships related to these objects are also updated so that constraints, rules, defaults or triggers can be executed if defined for the objects. The parsed user query is an object interface; this object interface is used for the isolation of the identifiers, and the objects, sub-objects and values for operands are derived 302. After the completion of isolation and derivation from the parsed object interface 302, the source of the request for the operation is analyzed 304. Further, the objects from the object interface are analyzed and the operations and objects are validated 306. In the event of an invalid operation, an error is notified 308. The validation of these identifiers in the object interface against the database metadata, using the requisite schema parameters, transforms and asserts these identifiers as objects or sub-objects, and all relevant information related to the object or sub-object type is updated in the object interface tree. In the event the objects are successfully isolated and analyzed 306, the options associated with the operation are isolated, values are linked and interdependencies, if any, are derived 310. After successful isolation 310, resources are allocated and links created to manage coordination during execution of the related operation 312, and then the sequence of the operation execution is isolated 314. On finding no interdependencies, i.e. no co-relation 310, if there is more than one operation, the operation execution sequence is isolated and the nesting related 314.
The vendor independent object analyzer isolates the options / values, and the related callback functions or modules change their behavior such that vendor independence is achieved 316. Further, the order or sequence of modification / function execution, in the order of least dependencies, is identified 318. The query planning is done by analyzing this sequence of arranged functions and deducing any logical or physical alternative methods of execution based on the set of rules or the current state of the objects involved in the operation 330. Semantic validation is carried out to check whether the query has expressions, handling their syntactical and solving issues across functionally different servers, including resolving the dependencies and returning the data in specific data types or variant arrays. For any query having an expression 322, the expression is further checked for co-relation or dependencies 330. If there exist no pending dependencies or co-relations in the expression query, then the expression is resolved and the result derived 332. In case the expression has pending co-relations or dependencies 330, the resources are allocated and the object interface stack is updated with the results 334. In the event the query has no expression 322, it is checked whether the query has SQL or datatype-specific transformation / translation functions 324. If the query has such functions 324, it is checked whether the query has co-relations or dependencies 326. If the query has further dependencies or co-relations 326, then resources are allocated and the object interface stacks are updated with the results 334. If no dependencies or co-relations 326 are pending, then the functions are resolved or executed and the results derived 328. Query transformation is required when the expected data and the persisted data are dissimilar and have to undergo change for presentation logic. After the query resolution, function execution and derivation of the results 328, query transformation, such as simplifying the interdependencies or relations, is possible through any existing arithmetic and logical rules 336. After successful query transformation, the system checks whether a rewrite is necessary 338. If a rewrite is required, the query is rewritten and any existing execution sequence (operations or conditions) is re-arranged 340. If no rewrite is required 338, resources are allocated and the query is executed based on dependencies 342. After the allocation of resources and execution of the query based on dependencies 342, it is checked whether there are any further iterative operations based on the expressions / functions / dependencies or co-relations 344.
For any query having iterative operations based on the expressions / functions / dependencies or co-relations 344, the iterative counter is suitably adjusted 346. After adjusting the iterative counter 346, resources are allocated and the query is executed based on dependencies 342. For a query requiring no such operations 344, the final result buffer is prepared and resources are allocated to cache or dispatch this data 348. After the preparation of the final result buffer and the allocation of resources to cache or dispatch 348, notification is carried out using the network agent or dispatcher to do the needful as per the required operation 350.
Fig. 3 is a logical flow diagram depicting the various stages of query planning. A syntactically correct parsed query that successfully emerges from semantic validation is ready for execution 200. The resolution of each query has to be semantically validated by verifying each identifier in the query against the object or sub-object definitions and their option values. After this query is parsed, the query is examined for the validation check 202. After this query validation, the query is checked for semantics 204. After the semantic checking is over, the query proceeds to the check for constraints 206. All the above steps are called the preprocess. The Vendor Independent Object Analyzer 120 translates and binds the operation, options and values with the object functionality, interpreting various vendor syntaxes.
The Query Optimization is planned firstly as a Logical Plan 130 and next as a Physical Plan 132. In the optimization phase, the query execution cost with respect to resource utilization is first estimated 208. The query execution cost is estimated from factors such as data block reads/writes, memory requirement, etc. After this query cost estimation, a Left Deep Tree query plan is prepared based on the relation cost 210. This Left Deep Tree plan is used in the execution of the query.
After the preparation of the Left Deep Tree based on the relation cost, the query is optimized by pushing down the projection list in the tree 212. Further, after the push-down of projections, the query is optimized by pushing down the conditions, expressions and SQL functions in the tree 214. Then the query optimization proceeds to identify joins and indexes 216, and the path is changed accordingly. Queries having Sub Queries are merged 218 into the parent query, except in the case of correlated queries.
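The effect of condition push-down (steps 212-214 above) can be shown with a small, assumed dataset: filtering before the join produces the same answer as filtering after it, while shrinking the intermediate result the join must build. The relations and predicate here are illustrative, not from the patent.

```python
def join(left, right, key):
    """Naive equi-join producing merged row dictionaries."""
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

orders = [{"cid": 1, "amt": 50}, {"cid": 2, "amt": 900}, {"cid": 2, "amt": 10}]
custs = [{"cid": 1, "region": "EU"}, {"cid": 2, "region": "US"}]

# without push-down: join everything, then filter the joined rows
naive = [r for r in join(orders, custs, "cid") if r["amt"] > 100]

# with push-down: filter one input first, then join the smaller relation
pushed = join([o for o in orders if o["amt"] > 100], custs, "cid")
```

Both produce the single qualifying row, but the pushed-down version joins one order row against the customers instead of three, which is exactly the saving the optimizer seeks.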
The Physical plan 132 involves identifying the optimized plan tree 220. After the Plan tree 220 is identified, the appropriate algorithm is applied 222. The algorithms used to process the query are Nested Loops 224, Simple Projection 226, Filter Projection 228, Merge Sort Joins 230, One Pass Sort Based 232 and Multi Pass Sort Based 234 algorithm.
The execution 238 part involves processing the query plan tree generated in the optimization stage. In this stage the relation's actual blocks are read either from disk or from cache, and different algorithms are used to process the nodes. The algorithms used to process the query are Nested Loops 224, Simple Projection 226, Filter Projection 228, Merge Sort Joins 230, One Pass Sort Based 232 and Multi Pass Sort Based 234.
After the query has passed through the preprocess and the optimization stages, namely the Logical Plan 130 and the Physical Plan 132, the query is given to the execution phase 238 and the following steps are performed on it during execution time:
A node from the plan tree is obtained and the node is checked for Cartesian or Relation type. According to the plan tree, an appropriate algorithm is selected.
In the Nested Loop algorithm, the inner loop relation resides in memory and the outer loop relation is read block-wise. As the relation is processed, intermediate blocks are prepared and committed to disk. These steps continue until all nodes in the tree are processed.
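A compact sketch of that nested-loop step: the inner relation is held in memory while the outer relation is consumed block-wise. The block size, data and in-memory result list are illustrative assumptions (the actual engine spills intermediate blocks to disk).

```python
def nested_loop_join(outer, inner, key, block_size):
    """Block nested-loop equi-join: inner relation in memory,
    outer relation read block-wise."""
    inner_mem = list(inner)                    # inner relation resides in memory
    result = []
    for i in range(0, len(outer), block_size):
        block = outer[i:i + block_size]        # read outer relation block-wise
        for o in block:
            for r in inner_mem:
                if o[key] == r[key]:
                    result.append({**o, **r})  # intermediate joined tuple
    return result

out = nested_loop_join(
    [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 1, "v": "c"}],
    [{"id": 1, "w": "x"}],
    "id", block_size=2)
```

Reading the outer relation in blocks bounds memory use to one block plus the inner relation, which is why the smaller relation is chosen as the inner one.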
The result preparation phase involves preparing a meaningful buffer from the blocks generated by the query execution phase. Everything is cleaned up in this phase and resources are relinquished based on the cursor and request type.
Fig. 4 is a screenshot depicting the query execution in the post-parsing scenario where the identifiers get converted to objects and sub-objects.
After the parser gives the syntactically correct parsed query as an object interface, the proposed invention perceives the query in an object-oriented paradigm and decomposes it into logical objects, associating a set of functional modules to deliver these functionalities. As can be seen from the state and event map diagram of Fig. 4, the DML operations have been broadly categorized into sets of operations, objects, options and values.
As depicted in Fig. 4, the query execution is carried out on operations such as the database commands SELECT 400, INSERT 405, UPDATE 410 and DELETE 415. The objects can be tables, views, etc. as mentioned in the queries, and the options can be conditions, expressions and SQL functions, which work on specific operands such as constants or data type columns. These are arranged and rearranged so as to optimize the sequence of execution of the related modules, using optimum resources in optimum time. Since the entire implementation of all complex data manipulation functionality is realized in state machines, with each module functionally independent and strictly adhering to the Atomicity, Consistency, Isolation and Durability (ACID) properties, the sequence of the calling convention, or module chaining, is decided by the query planner and optimizer. This gives the best and minimum code to be iteratively executed to deliver the expected functionality from the user query.
Fig. 5 lists the modules of the specific agent, with the total agent-wise resource timings displayed. It clearly depicts the resources used during the various stages of execution (CPU, Memory, Disk, Network and Timer). This helps analyze the query and gives a better picture of the execution of the server code, i.e. the agent, with respect to resources. The end user or developer can also clearly visualize which stage of execution is the most costly and can modify it to reduce execution time and achieve maximum concurrency within the given resource restrictions.
Fig. 6 plots the timing of each module graphically and can be used to achieve balance or uniformity in resource usage so as to make the response time reasonably predictable and accurate.
Claims
1. A system for data manipulation, comprising:
a means for accepting inputs in a plurality of formats;
an analyzing and mapping means to map functionality and features of the object with respect to their addressing notations;
a planning means to generate a plurality of logical and physical plans; and
a processing means to execute said instructions according to said optimized plan;
whereby an optimized instruction execution plan is dynamically generated using the available resource timings.
2. A system according to claim 1, wherein said invention provides for a means of command analysis.
3. A system according to claim 1, wherein said invention provides for means for analyzing the interface.
4. A system according to claim 1, wherein said invention provides for said query planner using the query optimizer, query transformer, semantic validator and expression solver.
5. A system according to claim 1, wherein said plurality of optimization rules lies outside the binary.
6. A method for dynamically generating instruction execution, comprising:
decomposing a query into objects, operations, options and values; formulating an execution plan based on physical and logical plans;
arranging and rearranging the execution plan; and
execution of said instruction;
whereby independence is achieved from the features and the syntactical variations supported by the various vendors, versions, products and hardware.
7. The method according to claim 6, wherein said invention supports query analysis, optimization and execution, further comprising:
using parsed query as object interface;
isolating identifiers and deriving objects/sub-objects and values;
analyzing objects and validating operations and objects;
isolating the options for each of the operations;
managing co-relations and dependencies;
isolating the operation sequence; isolating option values;
sequencing of the module /function execution;
deducing logical and physical plans; and checking resource timings and then using the most optimized plan.
8. A method according to claim 6, wherein said invention provides for the data manipulation and extends it to any functional server.
9. A method according to claim 6, wherein said invention can be extended to web, database, mail, etc. servers.
10. A method according to claim 6, wherein said invention provides for extending transactional support to different servers.
11. A method for accepting and executing command(s) either on server-side or client-side irrespective of any functional server.
12. A method according to claim 6, wherein said invention imparts cross server properties to any of the servers.
13. A method for associating patterns of data resulting from patterns of functionality in a plurality of wide, loosely coupled systems, and for user-defined relationships on any server objects.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
IN123MU2003 | 2003-01-30 | ||
IN123/MUM/2003 | 2003-01-30 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2004077217A2 true WO2004077217A2 (en) | 2004-09-10 |
WO2004077217A3 WO2004077217A3 (en) | 2005-05-19 |
Family
ID=32922937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IN2004/000028 WO2004077217A2 (en) | 2003-01-30 | 2004-01-29 | System and method of object query analysis, optimization and execution irrespective of server functionality |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2004077217A2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1805608A2 (en) * | 2004-10-07 | 2007-07-11 | Quantitative Analytics, Inc. | Command script parsing using local and extended storage for command lookup |
US9952916B2 (en) | 2015-04-10 | 2018-04-24 | Microsoft Technology Licensing, Llc | Event processing system paging |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6076051A (en) * | 1997-03-07 | 2000-06-13 | Microsoft Corporation | Information retrieval utilizing semantic representation of text |
US6366905B1 (en) * | 1999-06-22 | 2002-04-02 | Microsoft Corporation | Aggregations design in database services |
Also Published As
Publication number | Publication date |
---|---|
WO2004077217A3 (en) | 2005-05-19 |
Similar Documents
Publication | Title |
---|---|
US7984043B1 (en) | System and method for distributed query processing using configuration-independent query plans |
US8296316B2 (en) | Dynamically sharing a subtree of operators in a data stream management system operating on existing queries |
US10007698B2 (en) | Table parameterized functions in database |
EP0723239B1 (en) | Relational database system and method with high availability compilation of SQL programs |
US5940819A (en) | User specification of query access paths in a relational database management system |
US7673065B2 (en) | Support for sharing computation between aggregations in a data stream management system |
US8073826B2 (en) | Support for user defined functions in a data stream management system |
US9305057B2 (en) | Extensible indexing framework using data cartridges |
US7167848B2 (en) | Generating a hierarchical plain-text execution plan from a database query |
US7143078B2 (en) | System and method for managed database query pre-optimization |
US8224872B2 (en) | Automated data model extension through data crawler approach |
US7689947B2 (en) | Data-driven finite state machine engine for flow control |
US11314736B2 (en) | Group-by efficiency though functional dependencies and non-blocking aggregation functions |
US9171036B2 (en) | Batching heterogeneous database commands |
US11514009B2 (en) | Method and systems for mapping object oriented/functional languages to database languages |
US11893026B2 (en) | Advanced multiprovider optimization |
US9740735B2 (en) | Programming language extensions in structured queries |
US20170161266A1 (en) | User defined function, class creation for external data source access |
US7213014B2 (en) | Apparatus and method for using a predefined database operation as a data source for a different database operation |
EP3293645B1 (en) | Iterative evaluation of data through simd processor registers |
US11556533B2 (en) | Method for generating views based on a semantic model, that allows for autonomous performance improvements and complex calculations |
WO2004077217A2 (en) | System and method of object query analysis, optimization and execution irrespective of server functionality |
Böhm et al. | Model-driven development of complex and data-intensive integration processes |
Luong et al. | A Technical Perspective of DataCalc—Ad-hoc Analyses on Heterogeneous Data Sources |
WO2024063817A1 (en) | Producing natively compiled query plans by recompiling existing c code through partial evaluation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A2. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
| AL | Designated countries for regional patents | Kind code of ref document: A2. Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| 122 | Ep: pct application non-entry in european phase | |