US20200356568A1 - Pre-Emptive Database Processing For Performance Enhancement In A Hybrid Multi-Cloud Database Environment - Google Patents
- Publication number
- US20200356568A1 (application US16/541,901; US201916541901A)
- Authority
- US
- United States
- Prior art keywords
- query
- computing platform
- database
- data
- queries
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F11/3006—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
- G06F11/3409—Recording or statistical evaluation of computer activity for performance assessment
- G06F11/3419—Recording or statistical evaluation of computer activity for performance assessment by assessing time
- G06F11/3433—Recording or statistical evaluation of computer activity for performance assessment for load management
- G06F11/3442—Recording or statistical evaluation of computer activity for planning or managing the needed capacity
- G06F16/212—Schema design and management with details for data modelling support
- G06F16/217—Database tuning
- G06F16/2255—Hash tables
- G06F16/2282—Tablespace storage structures; Management thereof
- G06F16/2452—Query translation
- G06F16/24535—Query rewriting; Transformation of sub-queries or views
- G06F16/24539—Query rewriting; Transformation using cached or materialised query results
- G06F16/24542—Plan optimisation
- G06F16/24545—Selectivity estimation or determination
- G06F16/24552—Database cache management
- G06F16/24556—Aggregation; Duplicate elimination
- G06F16/2456—Join operations
- G06F16/24561—Intermediate data storage techniques for performance improvement
- G06F16/24562—Pointer or reference processing operations
- G06F16/2471—Distributed queries
- G06F16/248—Presentation of query results
- G06F16/256—Integrating or interfacing systems involving database management systems in federated or virtual databases
- G06F17/18—Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resource being a machine, e.g. CPUs, servers, terminals
- G06N20/00—Machine learning
- G06N20/20—Ensemble learning
- G06N5/003—
- G06N5/04—Inference or reasoning models
- G06F2201/80—Database-specific techniques
- G06F2209/501—Performance criteria
- G06F2209/508—Monitor
Definitions
- FIG. 1 is a schematic block diagram of an environment in which methods may be practiced in accordance with an embodiment of the present invention.
- FIG. 2 is a schematic block diagram of a system for generating and using a query performance model.
- FIG. 3A is a process flow diagram of a method for generating a query performance model in accordance with an embodiment of the present invention.
- FIG. 3B is a block diagram illustrating a machine learning approach to generating a query performance model in accordance with an embodiment of the present invention.
- FIG. 4 is a process flow diagram of a method for using a query performance model in accordance with an embodiment of the present invention.
- FIGS. 5A and 5B are diagrams illustrating different scenarios for processing a query in accordance with an embodiment of the present invention.
- FIG. 6 is a process flow diagram of a method for suggesting combinations and alternative processing for queries in accordance with an embodiment of the present invention.
- FIG. 7 is a process flow diagram of a method for generating combined alternatives to queries at different levels in accordance with an embodiment of the present invention.
- FIG. 8 is a process flow diagram of a method for implementing a combined alternative in accordance with an embodiment of the present invention.
- FIG. 9 is a process flow diagram of a method for evaluating combined alternatives in accordance with an embodiment of the present invention.
- FIG. 10 is a schematic block diagram of a system for automatically generating indexes and views of a database in accordance with an embodiment of the present invention.
- FIG. 11 is a diagram of data structures of a database in accordance with the prior art.
- FIG. 12 is a process flow diagram of a method for automatically generating indexes and views of a database in accordance with an embodiment of the present invention.
- FIG. 13 is a schematic block diagram of a system for recommending data sources in accordance with an embodiment of the present invention.
- FIG. 14 is a process flow diagram of a method for recommending data sources in accordance with an embodiment of the present invention.
- FIG. 15 is a process flow diagram of a method for identifying redundant data sources in accordance with an embodiment of the present invention.
- FIG. 16 is a schematic block diagram of a system for tuning implementation of queries according to desired performance in accordance with an embodiment of the present invention.
- FIG. 17 is a process flow diagram of a method for generating policies according to expected performance in accordance with an embodiment of the present invention.
- FIG. 18 is a process flow diagram of a method for implementing policies for achieving an expected performance in accordance with an embodiment of the present invention.
- FIG. 19 is a schematic block diagram of a computer system in accordance with the prior art for implementing systems and methods in accordance with an embodiment of the present invention.
- Embodiments in accordance with the invention may be embodied as an apparatus, method, or computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, the invention may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
- a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device.
- a computer-readable medium may comprise any non-transitory medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- Computer program code for carrying out operations of the invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages, and may also use descriptive or markup languages such as HTML, XML, JSON, and the like.
- the program code may execute entirely on a computer system as a stand-alone software package, on a stand-alone hardware unit, partly on the computer and partly on a remote computer spaced some distance from the computer, or entirely on a remote computer or server.
- the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- These computer program instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- the methods disclosed in this application may be practiced in a database environment 100 having some or all of the illustrated components.
- the database environment may include databases implemented on multiple computing platforms 102 a - 102 c , such as a first cloud computing platform 102 a (AMAZON WEB SERVICES (AWS), MICROSOFT AZURE, GOOGLE CLOUD, etc.); a datalake 102 b implemented on a second cloud computing platform, on-premise computing equipment, or other computing platform; and an on-premise computing platform 102 c.
- the computing platforms 102 a - 102 c may have various capabilities, capacities, performance characteristics, and costs (monetary or computational) associated therewith.
- some computing platforms 102 a - 102 c may provide only basic database functions such as retrieval of data referenced in a request. Others may provide database computations such as table joins, aggregations, or other complex computations.
- a cloud computing platform 102 a may provide both storage and computational resources that may store a database, implement data access, and perform arbitrary computations with respect to a database.
- a datalake 102 b may be embodied as a distributed file system storing disparate datasets in multiple formats.
- the datalake 102 b may store data in various storage structures such as relational, object, non-relational, text, hierarchical, and the like.
- a datalake 102 b may also provide some parallel computation facilities, e.g., MapReduce, Impala, Spark SQL, Hive LLAP, or the like.
- Datalakes 102 b may have varying processing capabilities and latencies.
- the datasets of a datalake 102 b are typically large with limited pre-defined meta-data structures defining the datasets.
- the computing equipment implementing a datalake 102 b may be on-premise or in a cloud computing platform 102 a.
- the engine 104 may be programmed to interface with the datalake 102 b and to process data in various formats stored in the datalake 102 b.
- An on-premise computing platform 102 c may be a computing platform in which database software executes on computing hardware typically without a virtualization layer such as would be present in a cloud computing platform 102 a.
- Each of the computing platforms 102 a - 102 c may implement databases according to various different implementations such as ORACLE, SQL, MYSQL, IBM DB2, SNOWFLAKE, or the like.
- the databases implemented may also include non-relational storage systems, such as S3, Hbase, and the like.
- the databases implemented by some or all of the computing platforms 102 a - 102 c may be sources of record and therefore store the current and authoritative copies of their data.
- access to the various computing platforms is performed through a scalable query engine 104 (“engine 104 ”) that implements interfaces to the various computing platforms 102 a - 102 c and database implementations implemented on the computing platforms 102 a - 102 c.
- the engine 104 may further interface with the cloud computing platforms to scale out and scale in computing resources allocated to storage of a database or computation with respect to a database.
- the engine 104 may do so by being programmed to use an application programming interface (API) or other interface exposed by a computing platform 102 a - 102 c or database implementation along with programming to monitor usage and perform scaling out and scaling in based on usage.
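The scaling behavior described above can be sketched as follows. This is a minimal illustration only: the `PlatformAPI` class, its methods, and the utilization thresholds are hypothetical stand-ins for whatever resource-management interface a given computing platform 102 a - 102 c actually exposes to the engine 104.

```python
from dataclasses import dataclass


@dataclass
class UsageSample:
    """A monitored usage observation for a computing platform."""
    cpu_utilization: float      # fraction of allocated compute in use, 0.0-1.0
    storage_utilization: float  # fraction of allocated storage in use, 0.0-1.0


class PlatformAPI:
    """Hypothetical stand-in for a platform's scaling API."""

    def __init__(self, compute_nodes: int):
        self.compute_nodes = compute_nodes

    def scale_out(self, nodes: int) -> None:
        self.compute_nodes += nodes

    def scale_in(self, nodes: int) -> None:
        self.compute_nodes = max(1, self.compute_nodes - nodes)


def rescale(platform: PlatformAPI, sample: UsageSample,
            high: float = 0.85, low: float = 0.25) -> None:
    """Scale allocated compute out or in based on a monitored usage sample."""
    if sample.cpu_utilization > high:
        platform.scale_out(1)
    elif sample.cpu_utilization < low and platform.compute_nodes > 1:
        platform.scale_in(1)
```

In practice the engine would feed `rescale` from the monitoring data described below rather than from a single sample, but the shape of the decision (compare measured usage against thresholds, then invoke the platform's scale-out/scale-in calls) is the same.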
- the environment 100 may further include a unified access layer 106 .
- the unified access layer 106 may provide an access point by which disparate user computing devices and users may access the engine 104 .
- the unified access layer 106 may be programmed to interface with user applications using APIs exposed by those applications. In this manner, access by multiple applications across an enterprise may be monitored and evaluated according to the methods disclosed herein in order to improve and/or manage performance of query processing.
- the unified access layer 106 may receive queries and provide responses to queries to a user interface 108 a that may be a general-purpose interface for receiving database queries, providing responses to database queries, and for presenting suggestions and representations of objects generated according to the methods disclosed herein, as well as receiving user responses to these suggestions and objects.
- the user interface 108 a may implement individual user accounts or dashboards that enable each user to create unique views of the databases implemented by the computing platforms 102 a - 102 c.
- activities of multiple users may be aggregated in order to improve performance of queries and to automate generating of views that are of interest to a team of users as discussed below.
- the unified access layer 106 may further interface with a data science interface 108 b (e.g., notebook application) for running experiments and making inferences with respect to large amounts of data.
- the unified access layer 106 may further interface with a business intelligence tool 108 c that performs queries in order to obtain performance indicators, reports, or other information from data describing operation of an enterprise or a target market.
- the unified access layer 106 may further access and implement policies 110 with respect to queries from the interfaces 108 a - 108 c.
- These policies 110 may be identity based: certain users or organizations are permitted to access certain databases or tables within databases.
- These policies 110 may be resource based: certain users or organizations are permitted to use a certain amount of a particular resource such as storage space or computing time of a particular computing platform 102 a - 102 c.
- These policies may be money based: a certain user or organization may be permitted to perform queries against an external computing platform 102 a - 102 b that cost up to a specified budget in a specified time period (e.g., a month).
- the policies 110 may further manage priorities, e.g., one interface 108 a - 108 c may be given a higher priority than another in the policies 110 .
- Policies 110 may also be capability based, i.e. certain computing platforms 102 a - 102 c are capable of performing certain operations.
- policies 110 may indicate which of the following a computing platform 102 a - 102 c is capable of performing: execute a projection, filter on a read, process a full query, implement parallel computation (e.g., MYSQL vs. SNOWFLAKE).
- Other capabilities may also include ability to create remote tables and the ability to create remote materialized views.
- Policies 110 may be scale based, i.e. indicate an amount of data that may be stored or processed by a computing platform 102 a - 102 c, an amount of computation per unit time that may be performed by a computing platform 102 a - 102 c or some other representation of the computing capacity of the computing platform 102 a - 102 c.
- Policies may specify scheduling availability and constraints for a computing platform 102 a - 102 c, e.g. an amount of resources available for queries of a user, organization, or interface 108 a - 108 c that may be scheduled to execute within a given time period. For example, this may include definitions of blackout periods, busy periods, or low usage periods for a particular computing platform 102 a - 102 c.
- Policies 110 may include a user-specified preference among sources 102 a - 102 c, such as a preferred cloud storage provider 102 a, a preferred cloud data base provider (SNOWFLAKE, REDSHIFT, etc.), or the like, for pushing computation to according to the methods described herein.
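A policy record combining several of the dimensions above (identity, money, capability) might be represented and checked as follows before the unified access layer 106 forwards a query. The `Policy` and field names here are illustrative assumptions, not drawn from the source.

```python
from dataclasses import dataclass, field


@dataclass
class Policy:
    """Hypothetical policy record for one user or organization."""
    allowed_databases: set = field(default_factory=set)  # identity based
    monthly_budget_usd: float = 0.0                      # money based
    spent_this_month_usd: float = 0.0
    capabilities: set = field(default_factory=set)       # capability based
    priority: int = 0                                    # scheduling priority


def admit(policy: Policy, database: str, estimated_cost_usd: float,
          required_capability: str) -> bool:
    """Return True if the query is permitted under the policy."""
    if database not in policy.allowed_databases:
        return False  # identity-based restriction
    if policy.spent_this_month_usd + estimated_cost_usd > policy.monthly_budget_usd:
        return False  # money-based restriction
    if required_capability not in policy.capabilities:
        return False  # capability-based restriction
    return True
```

A real implementation would also consult the scale-based and scheduling constraints (blackout periods, busy periods) described above, which require platform load data rather than a static record.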
- a monitoring module 112 a may detect both a query output by the unified access layer 106 and a response to that query thereby enabling the monitoring module 112 a to detect overall latency of a query and its response.
- Monitoring modules 114 a - 114 c may likewise each detect a query to a computing platform 102 a - 102 c and its response and use this to determine its latency. Monitoring modules 114 a - 114 c may monitor other aspects of a query, such as an amount of computing time used by the computing platform 102 a - 102 c to process the query. For example, such a metric may be reported by the computing platform 102 a - 102 c with the response to the query.
- Monitoring modules 114 a - 114 c may monitor any aspect of resource consumption by a query, such as bandwidth consumption. This may include network capacity consumed transmitting data between the scalable query engine 104 and a source 102 a - 102 c as part of processing of a query. Likewise, parallel processing of a query may result in data transfer locally within a particular source 102 a - 102 c. Accordingly, amounts of data transfer within a particular source resulting from a query may also be monitored and reported to the monitoring modules 114 a - 114 c. In some implementations, network bandwidth is purchased such that bandwidth consumption may be transformed to a monetary cost according to the cost per unit of data transmitted.
- a source 102 a - 102 c may record an amount of disk usage (amount of data written to and read from one or more disks) resulting from a query and report this to the monitoring modules 114 a - 114 c.
- Monitoring modules 112 , 114 a - 114 c may further record other aspects of queries in addition to latency and resource consumption. For example, some or all of the following metrics may be recorded for a query:
- Object usage (tables, views, remote calls, etc. referenced in the query or performed in response to the query)
- Monitoring modules 112 , 114 a - 114 c may further capture data describing data accessed by queries. Such data may include data demographics of data accessed, such as:
- Data volume of a query (record count, byte sizing, column count, etc.)
- Distributions (histograms of one or more columns of tables, cardinality)
- Temporal information (how often data is updated, how often should the system update it (e.g., in a cache))
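The per-query measurements described above might be captured in a record like the one below. This is a sketch under stated assumptions: the field names and the per-gigabyte rate are hypothetical, and a real metrics store 116 would hold many more attributes (distributions, temporal information, etc.).

```python
from dataclasses import dataclass


@dataclass
class QueryMetrics:
    """Illustrative per-query metrics record."""
    query_id: str
    latency_s: float          # wall-clock time from query to response
    compute_s: float          # compute time reported by the platform
    bytes_transferred: int    # network bandwidth consumed by the query
    record_count: int         # data volume touched by the query
    objects_used: tuple       # tables, views, remote calls referenced

    def bandwidth_cost_usd(self, usd_per_gb: float) -> float:
        """Convert bandwidth consumption to a monetary cost for billing."""
        return self.bytes_transferred / 1e9 * usd_per_gb
```

Converting consumption metrics to monetary cost in this way supports the billing, capacity-planning, and reporting uses described below.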
- Data from the monitoring modules 112 , 114 a - 114 c may be stored as metrics 116 and used according to the methods described herein. Some or all of the metrics from the monitoring modules 112 , 114 a - 114 c as described above may be used for billing purposes, i.e., to determine the resources consumed by a user and the amount owed by that user.
- The metrics 116 may also be used for capacity planning and reporting.
- The scalable query engine 104 may suggest acquiring additional computation or storage resources on a cloud computing platform 102 a in response to the usage information.
- The scalable query engine 104 may expand storage and computation resources allocated to the scalable query engine 104 in response to usage data. Usage data may also be summarized and reported to an administrator of a system for planning or other business purposes.
- The metrics 116 may also be used to evaluate usage with respect to constraints on the resources reported by the metrics. In particular, there may be constraints on network bandwidth between the scalable query engine 104 and a source 102 a - 102 c.
- The environment 100 is a hybrid multi-cloud database environment 100 .
- The environment 100 includes non-heterogeneous data management and analytic computing platforms 102 a - 102 c (silos) as sources.
- The scalable query engine 104 and unified access layer 106 that perform methods disclosed herein are not sources of record; they manage the caching, movement, and accessing of data on the various computing platforms 102 a - 102 c and perform, or invoke performing of, computation against data managed by other source systems of record.
- The unified access layer 106 provides unified, personalized views of databases in the systems of record and facilitates analytics data access and collaboration between individuals and teams.
- Various approaches are described below. In particular, approaches are described for monitoring the entire set of resources on platforms 102 a - 102 c and improving their utilization from an individual user's perspective. Approaches are further described for facilitating and accelerating implementation and execution of analytics and data science workloads.
- The unified access layer 106 and scalable query engine 104 (“the virtualization platform”) may be used to perform methods described herein.
- The virtualization platform may be used to create a virtualized access layer for individuals and small teams to collaboratively create personalized perspectives of data assets in any size enterprise (see FIGS. 6-10 and corresponding description).
- The virtualization platform need not be, and in many instances will not be, an enterprise's data system of record but works with data management systems of record (e.g., 102 a - 102 c ) irrespective of where the data management system resides.
- Data assets on these systems of record 102 a - 102 c can be accessed and personalized regardless of where they reside including a cloud computing platform 102 a, multiple cloud computing platforms 102 a, or a combination of premise computing equipment 102 c and one or more cloud computing platforms 102 a.
- The virtualization platform may virtualize access to a wide variety of data management systems implemented on any of these computing platforms 102 a - 102 c that may include some or all of on-premise data warehouses, cloud data warehouses, document databases, relational databases, distributed and non-distributed file systems, as well as object stores.
- The virtualization platform will automatically interrogate and integrate with the target system's catalog and management interfaces (see discussion of FIGS. 11 and 12 , below).
- The virtualization platform may provide computational elasticity, workload management, and many performance-enhancing and optimizing techniques (see discussion of FIGS. 13 and 14 , below).
- The virtualization platform may also integrate into an existing enterprise eco-system.
- The virtualization platform may also integrate into common security layers of various data management systems that may be implemented on the computing platforms 102 a - 102 c.
- The virtualization platform may also account for different security of computing platforms 102 a - 102 c as recorded in the policies 110 , e.g., prevent data from a source being cached on another source lacking appropriate data protections.
- The virtualization platform may also integrate with common collaboration applications and project management tools.
- The virtualization platform may adapt and make intelligent choices. For example, it may work to migrate workloads onto an available platform based on user criteria with respect to capability, time, cost, available resources, and service level agreements (see discussion of FIGS. 16-18 , below).
- The virtualization platform may infer information with respect to data relationships, usage patterns, lineage, etc. of received queries. These inferences allow the virtualization platform to assist the process of identifying and personalizing by means of suggestions (or enactment) of alternative usage patterns, optimizations, and sources (see discussion of FIGS. 6-9 and FIGS. 13 and 14 , below).
- FIG. 2 illustrates a system 200 for using metrics 116 gathered as described above.
- The metrics 116 and policies 110 may be processed by a model builder 202 in order to generate a query performance model 204 .
- The system 200 may be implemented on one or more computing devices implementing the engine 104 and unified access layer 106 .
- The query performance model 204 relates attributes of a query to its performance in terms of latency, computing resources consumed (memory used, processor time/cycles used, amount of data transmitted over a network), and possibly monetary cost on a given computing platform. Attributes of a query may include a data volume referenced (e.g., size of tables referenced by the query) and complexity (number and types of computations included in the query, such as aggregations).
- A query from an interface 108 a - 108 c has attributes and an overall performance in terms of total latency, total computing resources consumed, and possibly a total monetary cost. That same query may result in multiple sub-queries to multiple computing platforms 102 a - 102 c, each sub-query having metrics relating its own attributes to latency, consumed computing resources, and possibly a monetary cost.
- The model builder 202 may include a machine learning model or other algorithm that generates one or more models relating the attributes of a query to one or more metrics of performance (latency, computing resources, monetary cost). For example, there may be an overall model obtained by evaluating the overall metrics of queries. There may be individual models for each computing platform 102 a - 102 c computed from the queries and corresponding metrics for that computing platform 102 a - 102 c.
- The model builder 202 may operate “offline,” such as at off-peak times or on a different computing resource than the unified access layer, in order to avoid impacting production processing of queries.
- The model builder 202 or some other module may use the query performance model 204 to generate suggestions 206 that may be presented to a user by way of the user interface 108 .
- Suggestions 206 may be filtered according to the policies 110 that may limit which suggestions are permitted to be implemented.
- The suggestions 206 may indicate ways to improve an aspect of the performance (latency, computation resources, monetary cost) of a query based on the query performance model 204 .
- The user interface 108 a may further receive tuning parameters 208 from the user in response to the suggestions that modify one or more aspects of processing of a query in order to improve an aspect of performance as described below with respect to FIGS. 3 through 5B .
- Tuning 208 based on the query performance model 204 may also be performed according to the approach described below with respect to FIGS. 16 through 18 .
- FIG. 3A illustrates a method 300 that may be executed in the database environment 100 using the system 200 .
- The method 300 may include receiving 302 a query by the unified access layer 106 from an interface 108 a - 108 c.
- One or more attributes of the query may be logged 304 , such as time of receipt, a metric of complexity (number of computations, computational complexity of individual computations), a volume of data (size of table, number of rows, number of columns, etc.) referenced by the query, or other attribute of the query.
- The query may then be parsed 306 into individual data access and computation operations, and these operations may be executed 308 with respect to sources of data on one or more of the computing platforms 102 a - 102 c.
- A computing platform 102 a - 102 c may host a database that is the source of record for data referenced by the query.
- Step 308 may be performed according to an A/B test 310 .
- Data access may include retrieving (“fetching”) and caching data from a remote computing platform 102 a - 102 b on a premise computing platform 102 c.
- Data access may be performed without caching.
- A computation with respect to data may be performed on the premise computing platform 102 c or may be moved (“pushed”) to a cloud computing platform 102 a - 102 b that stores the data with respect to which the computation is being performed. Accordingly, a computation of a query may be pushed to a source, or data for a query may be fetched from a source, based on its attributes according to an A/B experimental plan such that the performance metrics for queries having various attributes may be determined for each data access and computation type.
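One simple way such an A/B experimental plan could be realized is to assign each query deterministically to a "fetch" arm or a "push" arm. The hashing scheme and function names below are illustrative assumptions, not the patented mechanism:

```python
import hashlib

# Hypothetical sketch of an A/B experimental plan for step 310: each query
# is deterministically assigned to the "fetch" arm (cache data locally) or
# the "push" arm (move computation to the source) based on a hash of its
# identifier, so that performance metrics can later be compared per arm.

def ab_arm(query_id: str, fetch_fraction: float = 0.5) -> str:
    """Stable assignment of a query to an experimental arm."""
    bucket = int(hashlib.sha256(query_id.encode()).hexdigest(), 16) % 10_000
    return "fetch" if bucket < fetch_fraction * 10_000 else "push"
```

Because the assignment is a pure function of the query identifier, retries of the same query always land in the same arm, keeping the collected metrics consistent.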
- The method 300 may include performing 312 , for each operation from step 308 , logging 314 of one or more values such as latency, computing resources consumed, data volume, operation type (data access, join, aggregation operation, etc.), or the like.
- Latency for an operation may include an elapsed time between transmitting of a query to a source 102 a - 102 c and receiving a response to the query, e.g., the engine 104 acknowledges completion of the query to the source 102 a - 102 c.
- The method 300 may further include logging 316 a time of completion of the query received at step 302 , e.g., a time at which all of the operations from step 306 are executed 308 and an overall response to the query is returned to the unified access layer 106 .
- Other metrics described above (see description of monitoring modules 112 , 114 a - 114 c ) that are captured for a query may be collected at step 316 .
- The method 300 may include creating 318 records in the metrics 116 for the data logged at steps 314 and 316 .
- The record may include attributes of the query from step 304 , the latency from step 316 , and computing resources that are an aggregation of the computing resources logged 314 for the individual operations of the query.
- The record may include the attributes of the operation, latency, and computation resources consumed as logged at step 314 for the operation.
- The records of step 318 may be augmented 320 , 322 with additional data either before or after creation 318 .
- A monetary cost of an operation may be computed based on consumed computing resources and a known cost function. This monetary cost may be added 320 to the record of the operation, and the aggregate monetary cost of the operations of an overall query may be added 320 to the record of the overall query.
- The policy data 110 may impose additional cost on an operation.
- A peak time period may be defined in which the cost of operations is artificially inflated in order to bias operations toward execution outside of the peak time period or to penalize operations that are of lower priority.
- Records of operations during the peak time period may be assigned 322 an additional “cost” value based on computational resources consumed and independent of monetary cost that will be used to train the model builder 202 to balance usage of that source 102 a - 102 c during the peak time period.
- The method 300 may then include processing 324 the metric records by the model builder 202 in order to create the query performance model 204 .
- This may include creating an overall model and models for individual computing platforms 102 a - 102 c that relate attributes of a query to one or more performance metrics (latency, computational resources, monetary cost).
- Step 324 may include using any statistical algorithm, machine learning algorithm, or artificial intelligence algorithm (e.g., neural network, deep neural network, convolutional neural network, etc.) known in the art.
- Each record may be viewed as an item of training data including the query attributes as an input and the performance metrics as desired outputs.
- The machine learning model may then be trained to produce the desired output for a given input according to any approach known in the art.
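As a deliberately minimal stand-in for the model builder 202 (a simplification, not the patented algorithm), the relationship between a single query attribute and a single performance metric can be fit by ordinary least squares; a real builder would use richer feature sets and any of the learning algorithms named above:

```python
# Minimal sketch: fit a straight line relating one query attribute (data
# volume) to one performance metric (latency) by ordinary least squares,
# standing in for the query performance model 204.

def fit_latency_model(records):
    """records: list of (data_volume, observed_latency_seconds) pairs."""
    n = len(records)
    mean_v = sum(v for v, _ in records) / n
    mean_l = sum(l for _, l in records) / n
    cov = sum((v - mean_v) * (l - mean_l) for v, l in records)
    var = sum((v - mean_v) ** 2 for v, _ in records)
    slope = cov / var
    intercept = mean_l - slope * mean_v
    # Return a predictor: estimated latency for a given data volume.
    return lambda volume: intercept + slope * volume
```

Training on logged (attribute, metric) pairs and predicting the metric for an unseen attribute value mirrors the input/desired-output framing of the training records described above.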
- FIG. 3B illustrates an example approach 326 for generating 324 the query performance model 204 using machine learning.
- Training data entries 328 may be generated using the data assembled as described above with respect to FIG. 3A .
- Each entry 328 may correspond to a query.
- A query feature set 330 may be assembled that includes features such as the volume of data referenced by the query, the metric of query complexity, which interface 108 a - 108 c generated the query, which computing platform 102 a - 102 c performed an operation in the query, whether data referenced by the query was cached and where it was cached, whether computation was pushed to a particular computing platform 102 a - 102 c, or other attributes of the query.
- The training data entry 328 may further include a policy feature set 332 that includes data describing policies in effect when the query was executed, such as the policy data added at step 322 .
- Some or all of the computing platforms 102 a - 102 c that are used to execute the query may have policies that reflect resource limitations (computing time, memory, data volume) that are either artificially imposed or reflect physical limits of available computing and storage hardware. Accordingly, these policies may affect the latency or other metric of performance of the query and may be included in the training data entry 328 .
- The training data entry 328 may include the query metrics 334 or data derived from the query metrics 334 , e.g., the metrics obtained for the query at some or all of steps 314 , 316 , and 320 . Accordingly, the feature sets 330 , 332 become the input for the entry 328 and the query metrics 334 are a desired output that the machine learning model is trained to produce.
- Training a machine learning model according to the training data entries 328 may include training multiple machine learning models.
- Training data entries 328 may be divided into sets of entries by a sampling algorithm 340 .
- The sampling algorithm may be any sampling algorithm known in the art.
- Each set of entries may then be used to train a machine learning model by a machine learning algorithm 342 a - 342 c .
- The result of this is a set of machine learning models 344 a - 344 c that are each trained to predict query metrics 334 based on input feature sets 330 , 332 for the set of training data entries 328 that was used to train it.
- One of the models 344 a - 344 c or a combination thereof may be selected as a selected model 346 .
- A given input query feature set and policy feature set of a query may be input to the models 344 a - 344 c , each of which outputs a prediction for one or more corresponding query metrics that would result from the query feature set and policy feature set.
- A selection algorithm 348 may be used to either (a) identify a most accurate of the models 344 a - 344 c as the selected model 346 or (b) generate a model 346 for selecting among the models 344 a - 344 c for given input feature sets 330 , 332 .
- Selecting the most accurate model 344 a - 344 c may include processing a set of data entries that is different from the sets of entries used to train the models 344 a - 344 c (“the validation set”).
- the model 344 a - 344 c with the most accurate predictions (corresponding most closely to the actual metrics 334 of the validation set) may then be selected as the selected model 346 .
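The sample, train, and select flow of FIG. 3B can be sketched as follows. The trivial mean-latency predictor below stands in for the machine learning algorithms 342 a - 342 c, and the function names are illustrative assumptions:

```python
import random

# Hypothetical sketch of FIG. 3B: draw bootstrap samples of training
# entries (sampling algorithm 340), fit a trivial mean-latency predictor on
# each (standing in for algorithms 342a-342c), then pick the model with the
# lowest squared error on a held-out validation set (selected model 346).

def train_mean_model(entries):
    """entries: list of (features, latency). Predicts the training mean."""
    mean = sum(latency for _, latency in entries) / len(entries)
    return lambda features: mean

def select_model(entries, validation, n_models=3, seed=0):
    rng = random.Random(seed)
    models = [
        train_mean_model(rng.choices(entries, k=len(entries)))  # bootstrap sample
        for _ in range(n_models)
    ]
    def validation_error(model):
        return sum((model(f) - y) ** 2 for f, y in validation)
    return min(models, key=validation_error)
```

Using a validation set disjoint from the training samples, as the text describes, is what lets the selection step compare the models fairly.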
- Multiple models 344 a - 344 c may be selected for use in combination. For example, an average or weighted average of the multiple models 344 a - 344 c may be used.
- Other aggregations are possible, e.g., selection of the output of the multiple models 344 a - 344 c that is most frequent among the outputs of the multiple models for a given input.
- The machine learning algorithm 348 may train the model 346 .
- Training entries for the machine learning algorithm 348 may each include outputs of the models 344 a - 344 c for a query feature set 330 and policy feature set 332 of a query as inputs and the query metrics 334 for that query as a desired output.
- The same or different training data entries 328 may be used for training the selection model 346 as were used to train the models 344 a - 344 c.
- The model 346 is therefore used to select among outputs of the models 344 a - 344 c in production.
- The machine learning models 344 a - 344 c and model 346 may be embodied as any machine learning algorithm known in the art, such as decision tree, clustering, Bayesian network, genetic, deep neural network, convolutional neural network, or the like.
- FIG. 3B is exemplary only and other approaches may be used.
- FIG. 3B illustrates an example of boosting in order to improve the performance of a machine learning algorithm.
- Any boosting algorithm known in the art may be used.
- The models 344 a - 344 c are models with different configurations of hyperparameters that are each separately trained, with the selection model 346 being trained to select or combine the outputs of the models 344 a - 344 c as described above.
- FIG. 4 and FIGS. 5A and 5B illustrate a method 400 for using a query performance model 204 to process individual queries.
- FIGS. 5A and 5B illustrate various query plans showing where data access and computation may be performed for a query having a given query model.
- The method 400 may include receiving 402 a query and parsing 404 the query into individual operations (data access and computation) that constitute the query model of the query.
- FIG. 5A shows a query composed of computational operation A that operates with respect to the results of computational operations B and C.
- Operation B processes results of a data access D with respect to a data source S 1 and a data access E with respect to a data source S 2 .
- Operation C processes the result of a data access F with respect to a data source S 3 .
- Computations A, B, and C are performed by a local computing platform P.
- Computations may include some or all of joins, sub-queries, groupings, aggregations, or other database computations.
- Sources S 1 and S 3 may be remote from the local computing platform P.
- The sources S 1 -S 3 and local computing platform P may be embodied as any of the computing platforms 102 a - 102 c. This is just an example, and operations may be arranged in a hierarchy of any number of levels with respect to any number of data accesses.
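The query model of FIG. 5A can be sketched as a tree of operations; the class and field names below are illustrative assumptions, not terms from the specification:

```python
from dataclasses import dataclass, field

# Sketch of the query model of FIG. 5A: computation A consumes the results
# of computations B and C; B consumes data accesses D (source S1) and E
# (source S2); C consumes data access F (source S3); computations run on
# the local platform P.

@dataclass
class Op:
    name: str
    kind: str                  # "computation" or "data_access"
    where: str                 # platform performing the operation / holding the data
    children: list = field(default_factory=list)

query_model = Op("A", "computation", "P", [
    Op("B", "computation", "P", [
        Op("D", "data_access", "S1"),
        Op("E", "data_access", "S2"),
    ]),
    Op("C", "computation", "P", [
        Op("F", "data_access", "S3"),
    ]),
])

def data_accesses(op):
    """Depth-first list of data-access leaves under an operation."""
    if op.kind == "data_access":
        return [op]
    return [leaf for child in op.children for leaf in data_accesses(child)]
```

Walking the tree for its data-access leaves is one way the source calls of a query could be enumerated when considering alternative placements.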
- Local computing platform P may be any of the computing platforms 102 a - 102 c that is executing one or both of the engine 104 and unified access layer 106 .
- Sources S 1 -S 3 refer to sources of record for databases or other types of data that are on a different computing platform 102 a - 102 c than the local computing platform P such that decisions may be made regarding pushing of computation to the source S 1 -S 3 and fetching of data from the source S 1 -S 3 to the local computing platform P as described below.
- The method 400 may include identifying 406 possible alternative sources for performing each operation.
- FIG. 5A shows one set of alternatives with computation being performed on the premise P and data accesses being performed on the sources S 1 -S 3 storing the data being accessed.
- FIG. 5B shows other alternatives.
- Data access D may be performed by first caching the data on the premise P.
- A computation B with respect to data from data access E of data on source S 2 may be performed with computation on premise P with data access remaining on the source S 2 .
- Computation C with respect to data on source S 3 may be shifted to source S 3 such that the computation C and the data access F for data processed according to computation C are all performed on the source S 3 .
- Data may be transferred from a first source to a cache on a second source (e.g., S 2 to S 3 ) that is not the source of record for the data such that one or both of computation and data access are performed on the second source.
- Possible alternatives may be filtered with respect to policies 110 .
- Some data may be prohibited from being cached on local computing platform P or on a different source than its current source due to privacy or other concerns.
- Some sources S 1 -S 3 may have limited computational capacity and may be used only for data access. Some sources S 1 -S 3 may be unavailable for performing computation at peak times. Any other policy may limit possible alternatives as determined by a human operator or limitations of a particular source S 1 -S 3 .
- Alternatives and their predicted metrics may be determined without first filtering. Accordingly, where an alternative has high performance relative to others (e.g., highest performance or top N highest performance, where N is an integer) but is prohibited by policy, it may still be suggested along with a suggestion to change the policy in order to permit the alternative.
- The method 400 may include determining 408 metrics for each alternative according to the query performance model 204 .
- An alternative may be represented as an operation having various attributes and a source that may be used to perform that operation. Note that where an alternative is a data access performed by first caching on a different source (local computing platform P or a different source S 1 -S 3 ), the data transfer from the source of record to a different source may be performed prior to performing the operation.
- Caching on the local computing platform P is performed, and subsequent queries referencing data that is cached are then executed with respect to the cached data. Accordingly, the latency of data transfer to the cache may be omitted when estimating metrics for the alternative at step 408 .
- Determining 408 the metrics may be performed by applying the query performance model 204 to the attributes of the operation (data volume, computation type) and the configuration of the alternative (where performed).
- The model corresponding to that source may be applied to the attributes of the operation to determine estimated performance (latency, computation resources consumed, monetary cost) of performing that operation using that source.
- The model may be a model trained to relate performance to that type of operation, i.e., caching to the local computing platform P from that specific source S 1 -S 3 , by evaluating previous data accesses performed in that same manner for the local computing platform P and that specific source S 1 -S 3 .
- Step 408 may include inputting a feature set for the alternative to a machine learning model trained as described above with respect to FIG. 3B .
- Current policies 110 may be input as a policy feature set to this machine learning model as described above with respect to FIG. 3B .
- The machine learning model then outputs a predicted performance for that alternative. Note that where the machine learning model is trained with a policy feature set, the predictions may take into account changes in policy (e.g., increasing or decreasing of computing or storage resources, such as adding more computing nodes).
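The enumerate-estimate-filter-pick flow of steps 406 through 410 can be sketched as follows. The toy cost function stands in for the trained query performance model 204, and all field names and constants are illustrative assumptions:

```python
# Hypothetical sketch of steps 406-410: enumerate placement alternatives
# for one operation, estimate a metric for each with a toy cost function,
# filter by policy, and pick the best. A real system would query the
# trained query performance model 204 instead of the stand-in below.

def estimate_latency(alternative):
    """Toy stand-in for the model: pushing computation to the source
    avoids transferring raw rows to the local platform P."""
    base = 0.5  # seconds of fixed overhead (assumed)
    if alternative["where"] == "P":
        return base + alternative["rows"] * 1e-6  # per-row transfer cost
    return base

def best_alternative(alternatives, allowed_by_policy):
    """Lowest-estimated-latency alternative that the policies 110 permit."""
    candidates = [a for a in alternatives if allowed_by_policy(a)]
    return min(candidates, key=estimate_latency)
```

With five million rows, pushing computation C to source S 3 is estimated far cheaper than computing on P; if a policy forbids computation on S 3, the filter removes that alternative and the local plan wins instead.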
- The method 400 may further include implementing 410 or suggesting implementation of an alternative for an operation that has improved estimated performance relative to other alternatives for that operation.
- A default query plan for a query may be as shown in FIG. 5A , with data accesses performed at sources S 1 -S 3 and all computation performed on the local computing platform P. If caching for a data access on the local computing platform P or on a different source S 1 -S 3 is determined to provide improved performance for a data access operation, then an alternative query plan (e.g., see FIG. 5B ) may be automatically implemented or suggested to the user, such as by way of a notification in the interface 108 a.
- Pushing a computation to a source S 1 -S 3 may be automatically implemented or suggested to the user, such as by way of a notification in the interface 108 a.
- The alternative represented by the suggestion may then be implemented: e.g., data may be retrieved and cached or computation may be pushed according to the suggestion.
- Alternatives of type (1) may be implemented by the scalable query engine 104 that invokes execution of an operation on the local computing platform P or a remote source in order to achieve improved performance (lower latency, lower consumption of computing resources, lower cost).
- Alternatives of type (1) may be performed in combination with other known query optimization techniques (e.g., SPARK) in order to improve the performance of queries.
- Alternatives of type (1) may be performed by a subsystem of the engine 104 that determines where data and computations should be located in order to improve performance and to meet users' expected performance/resource targets (see discussion of FIGS. 16 through 18 , below).
- Policies 110 may limit or direct how alternatives of any of types (1), (2), and (3) are performed and may result in execution of queries, implementation of caching, and structuring of views that is sub-optimal.
- Alternatives of type (2) may be implemented by the unified access layer 106 that invokes caching of data on the premise P or a source other than the source of record for the cached data. This may include generating materialized views and maintaining coherency of cached data and materialized views created according to the methods disclosed herein.
- The unified access layer 106 may also translate queries to reference the location of the cached data rather than a source of record. Translation of queries may also be performed in order to better use the computing platforms 102 a - 102 c and other computing resources of the premise P.
- Alternatives of type (3) may also be implemented by the unified access layer 106 , which can generate views spanning multiple computing platforms 102 a - 102 c due to its virtualization of access of databases on them.
- the unified access layer 106 may influence the manner in which the scalable query engine 104 improves the performance of queries. For example, the unified access layer 106 may inform the scalable query engine 104 of the availability of cached versions of data, views, materialized views, or other alternative representations of data generated according to the methods disclosed herein. In some embodiments, queries passed to the scalable query engine 104 may be annotated with “hints” that indicate which physical representation of data (source of record, local cache, remote cache, materialized view, etc.) to use when executing the query in order to improve performance.
- FIGS. 4 , 5A, and 5B further illustrate the hybrid multi-cloud approach that may be implemented using the unified access layer 106 and engine 104 .
- Queries may span multiple cloud computing platforms and a local computing platform in a hybrid fashion that is transparent to the user.
- Cloud computing platforms from multiple providers may be accessed in this manner, e.g., AWS, GCP (GOOGLE Cloud Platform), AZURE, and the like.
- FIG. 6 illustrates a method 600 that may be executed by the system 200 .
- The method 600 may be used to (a) streamline usage of an individual user that may include redundancies and/or (b) facilitate collaborative usage by multiple users in order to improve performance.
- Processing is described with respect to overlapping queries of multiple users. Overlap among the queries of an individual user may be identified and used to improve performance in the same manner.
- The method 600 may include receiving 602 queries, such as by way of any of the interfaces 108 a - 108 c from a plurality of users, and evaluating 604 the structure of the queries.
- A query may be represented as a hierarchy or query model (see FIG. 5A ) including a plurality of nodes such that each node is either a data access or a computation.
- A node may have child nodes representing sources of data processed by that node. Accordingly, a node may take as inputs a data access of a child node or a result of computation by a child node.
- The queries may be evaluated 606 to identify source calls made as a result of the query.
- The source S 1 -S 3 called, the tables accessed, the columns of tables accessed, or other data regarding data accessed as a result of a query may also be determined at step 606 .
- The method 600 may further include identifying 608 overlap.
- Overlap may be identified in the structure, e.g., the query model of a first query has identical nodes (same computation) and relationships among nodes as a second query, or in the source calls, e.g., the source calls of a first query reference a same table, column of a table, or other data as a second query. Note that complete overlap is not required; some partial overlap may also be identified at step 608 .
- The method 600 may further include generating 610 a combined alternative according to the overlap. Generating a combined alternative may be performed according to a degree of overlap, as shown by the method 700 of FIG. 7 .
- Queries may be found 702 to overlap at the structure level in that they have overlapping query models: both overlapping sources (same tables from same source) and computation (same nodes and relationships among nodes).
- A combined alternative may be created 704 as a query object defining the overlapping portion of the queries.
- Queries may be found 706 to overlap at the operation level, such as having a join of tables in common or a common aggregation of one or more common tables from common sources. Accordingly, the combined alternative may be created 708 as an index or view object that includes that operation (join or aggregation) of the one or more common tables.
- The query object may be an expanded query that includes first dimensions of a table included in a first query that overlaps second dimensions of a second query that does not include the first dimensions.
- Queries found 710 to overlap at the source level may result in a combined alternative created 712 as a materialized (cached) view of all columns of the table referenced by the overlapping queries.
- Overlapping queries that access and possibly perform computation with respect to multiple tables across multiple sources S 1 -S 3 can result in a combined alternative according to 704 and 708 that is a query object or view object referencing the multiple sources and including any overlapping computation.
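The overlap detection of step 608 and the degree-of-overlap mapping of the method 700 can be sketched together. The set encoding of source calls and the return labels are illustrative assumptions:

```python
# Hypothetical sketch of steps 608-610 and the method 700: detect
# source-level overlap between two queries, then map the degree of overlap
# to the kind of combined alternative created.

def source_overlap(calls_a, calls_b):
    """calls_*: sets of (source, table) pairs referenced by each query."""
    return calls_a & calls_b

def combined_alternative_kind(structure_match, common_operations, common_tables):
    if structure_match:        # overlapping query models: sources and computation
        return "query_object"
    if common_operations:      # e.g., a join or aggregation of common tables
        return "index_or_view"
    if common_tables:          # only common tables/columns referenced
        return "materialized_view"
    return None
```

Checking the deepest degree of overlap first mirrors the method 700 flow: a full structural match yields a query object, an operation-level match an index or view, and a source-level match a materialized view.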
- a combined alternative for multiple users may be based on interactive feedback received from the multiple users.
- the multiple users may collaborate by way of the interface 108 a in order to propose combined alternatives that may then be implemented in the same manner as automatically-generated combined alternatives according to the methods disclosed herein.
- the method 600 may further include evaluating 612 the combined alternatives with respect to the query performance model 204 .
- estimated (per model 204 ) or measured performances of the individual queries may be compared to an estimated performance of the combined alternative.
- the overlap among the queries may include one or more overlapping operations including data accesses from sources and computation. Accordingly, the measured or estimated performance of performing the overlapping operations separately may be compared to the estimated performance of performing the combined alternative.
- the combined alternative has attributes indicating the data volume and operation type of the combined alternative. These attributes may be processed by the query performance model 204 to determine an estimated performance.
- the method 600 may include evaluating 614 whether the performance of the combined alternative is better (lower latency, lower consumption of computational resources, lower monetary cost) than the combined performance of the separate overlapping operations (e.g., sum of latencies, sum of consumed computational resources, sum of monetary costs). If so, then the combined alternative may be determined to be an improvement and may be suggested 616 to a user or automatically implemented.
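The evaluation at steps 612 and 614 can be sketched as a simple metric comparison. The metric names, the list structure, and the dominance test below are illustrative assumptions rather than the patent's actual performance model 204.

```python
# Hypothetical sketch of steps 612-614: compare the estimated cost of a
# combined alternative against the summed cost of the separate queries.
# Metric names and the dominance test are assumptions, not the patent's model.

def combined_is_improvement(separate_metrics, combined_metrics):
    """separate_metrics: list of dicts (one per overlapping query);
    combined_metrics: dict. Keys: 'latency_ms', 'cpu_units', 'cost_usd'."""
    totals = {k: sum(m[k] for m in separate_metrics)
              for k in ('latency_ms', 'cpu_units', 'cost_usd')}
    # The combined alternative is an improvement only if it is no worse
    # on every metric and strictly better on at least one.
    no_worse = all(combined_metrics[k] <= totals[k] for k in totals)
    better = any(combined_metrics[k] < totals[k] for k in totals)
    return no_worse and better

separate = [
    {'latency_ms': 120, 'cpu_units': 4, 'cost_usd': 0.02},
    {'latency_ms': 90,  'cpu_units': 3, 'cost_usd': 0.015},
]
combined = {'latency_ms': 140, 'cpu_units': 5, 'cost_usd': 0.025}
print(combined_is_improvement(separate, combined))  # True: wins on all metrics
```

Only an alternative that dominates the separate executions on every tracked metric would be suggested at step 616 under this sketch; a weighted score, as discussed for Example 1 below, is an equally plausible reading.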
- the displayed suggestion may display the expected improvement from the combined alternative, e.g., a change in value of one or more metrics determined to be improved at step 614 .
- An example of how step 616 may be performed is described below with respect to FIG. 8 .
- the method 800 may be executed by the system 200 in response to identifying a combined alternative providing a performance benefit as described above with respect to FIG. 7 .
- the method 800 may include creating 802 a combined object according to the combined alternative. Note that this object is not an implementation of the combined alternative but rather an object containing parameters sufficient to define the combined alternative such that it can be implemented later. Accordingly, where the combined alternative is a result of combining queries at the query level, the object may include the query itself defining operations performed and with respect to what tables in what sources. The object may, for example, include the query model of the combined query.
- the combined object may list the tables, sources of the tables, and how the tables are input to an operation according to the index or view.
- where the combined alternative is caching of a table corresponding to multiple queries, the combined object may define a table, a source for the table, and an indication that the table is to be cached.
- the method 800 may further include adding 804 a representation of the object that is linked to the object to the interface 108 .
- the representation may be an icon, text describing the object, or the like.
- the representation may include a user interface element selectable by a user such that the system 200 responds to the selection by presenting information about the object, such as the parameters defining the combined alternative to be implemented by the object, the queries that were combined to obtain the combined alternative, an expected performance improvement from implementing the combined alternative (see step 614 ), identifiers of other users that generated one or more of the queries used to define the combined alternative, or other information.
- the representation may be added 804 to an interface viewable by each user that generated a query that contributed to creation of the combined object per the method 700 .
- the representation may further include a user interface element selectable by a user such that the system 200 responds to interaction with the user interface element to implement the combined alternative represented by the combined object. Accordingly, caching of a table, creating a combined view or index, or executing a query as defined by the combined object is performed 808 in response to detecting 806 selection of the user interface element.
- the combined object may further include data describing the performance metric that is estimated to be improved by implementing the combined object (see step 614 of the method 600 ).
- the method 900 may be executed by the system 200 when implementing a combined object either automatically or upon instruction of a user (see discussion of FIG. 8 ).
- the method 900 may include implementing 902 the combined alternative represented by a combined object (see step 808 ) and measuring and evaluating 904 metrics of queries implicated by the combined object: queries and operations of queries using cached tables, queries or operations of queries using an index or view, queries defined according to a combined query, etc.
- the manner in which these metrics are obtained may be as described above with respect to FIG. 2 .
- the method 900 may include evaluating 906 whether the metric (latency, consumption of computing resources, monetary cost) estimated to be improved by the combined object is in fact improved, for example, whether latency is improved by caching or whether computational resources are reduced by using a combined view or index. If not, the combined alternative may be removed 908, i.e., the caching, combined view or index, combined query, or other function implemented according to the combined object may be removed 908 and queries will instead be processed individually without combination according to the combined object.
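The implement-measure-rollback loop of steps 902-908 can be sketched as follows. The `CacheManager` class and the latency measurement callback are hypothetical stand-ins for whatever caching mechanism a deployment actually uses.

```python
# Minimal sketch of method 900: implement a combined alternative (here, table
# caching), measure the targeted metric, and remove the alternative if the
# metric did not improve. The CacheManager interface is hypothetical.

class CacheManager:
    def __init__(self):
        self.cached = set()
    def cache_table(self, table):
        self.cached.add(table)
    def drop(self, table):
        self.cached.discard(table)

def apply_with_rollback(cache, table, baseline_latency, measure):
    """Cache `table`, then keep it only if the measured latency improves on
    the pre-caching baseline (step 906); otherwise remove it (step 908)."""
    cache.cache_table(table)
    if measure(table) >= baseline_latency:
        cache.drop(table)
        return False
    return True

cache = CacheManager()
# Simulated measurements: caching 'orders' helps, caching 'log' does not.
latency_after = {'orders': 40, 'log': 120}
kept = apply_with_rollback(cache, 'orders', 100, latency_after.__getitem__)
print(kept, 'orders' in cache.cached)  # True True
```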
- FIG. 10 illustrates a system 1000 that may be implemented within the environment 100 .
- a computing platform 102 a - 102 c that is a source S 1 -S 3 of data may implement a database 1002 that further includes metadata 1004 describing tables stored in the database 1002 .
- the data stored in the database 1002 may also have metrics 1006 associated therewith that may be stored in the computing platform 102 a - 102 c implementing the database 1002 or may be calculated based on the data stored in the database 1002 .
- the computer system implementing the engine 104 and unified access layer 106 may further implement an introspector 1008 that evaluates queries received from the interface 108 a, and possibly from other interfaces 108 b, 108 c that access the engine 104 by way of the unified access layer 106 .
- the introspector 1008 preemptively gathers information about databases 1002 on remote computing platforms 102 a, 102 b in response to queries that invoke access to that database 1002 .
- a database 1002 may include or have associated therewith a catalog that describes tables 1102 a - 1102 c in the database.
- Each table 1102 a - 1102 c may further include metadata that describes the table 1102 a - 1102 c, such as the number of columns of each table 1102 a - 1102 c, the number of rows in each table 1102 a - 1102 c, the data type of each column, a label of each column, or other information.
- the introspector 1008 may gather this information from the catalog 1100 and metadata 1104 a - 1104 c for use according to the methods described below.
- the introspector 1008 may further preemptively create managed views 1010 or indexes in response to this gathered information to facilitate subsequent queries to the database 1002 as described below.
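The catalog and metadata gathering of steps 1206-1208 can be illustrated with a local SQLite database standing in for a remote source; the real introspector 1008 would issue equivalent catalog queries over a network connection.

```python
# Illustrative sketch of the introspector 1008 reading a database catalog and
# per-table metadata (method 1200, steps 1206-1208). SQLite stands in for a
# remote source so the example is self-contained.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
""")

def introspect(conn):
    """Return {table: [(column, declared_type), ...]} from the catalog."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    # PRAGMA table_info rows are (cid, name, type, notnull, default, pk).
    return {t: [(c[1], c[2]) for c in conn.execute(f"PRAGMA table_info({t})")]
            for t in tables}

meta = introspect(conn)
print(sorted(meta))    # ['customers', 'orders']
print(meta['orders'])  # [('id', 'INTEGER'), ('customer_id', 'INTEGER'), ('total', 'REAL')]
```

This gathered schema is the raw material from which managed views 1010 could then be generated without any user request.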
- FIG. 12 illustrates a method 1200 that may be executed by the system 1000 .
- the method 1200 may include receiving 1202 an operation (e.g., an operation obtained from parsing a query as described above) referencing a table (“the subject table”) of a database 1002 on a source.
- the method 1200 is particularly advantageous where the source is on a different computing platform 102 a - 102 c than the system 1000 . However, this is not required.
- the operation may be executed 1204 with respect to the source.
- the engine 104 may issue a request to the source that hosts the subject table and the database 1002 to perform the operation.
- the source returns a result of the operation to the engine 104 , which returns the result to the interface 108 a - 108 c that issued the query that included the operation through the unified access layer 106 .
- the introspector 1008 detects or monitors the operation and detects the reference to the subject table and the source. In response, the introspector 1008 retrieves 1206 the catalog 1100 of the database 1002 from the source. The introspector 1008 uses this information to identify the tables 1102 a - 1102 c of the database 1002 and to retrieve 1208 the metadata 1104 a - 1104 c of the tables 1102 a - 1102 c (other than the subject table, if metadata for it was already obtained as a result of step 1204 ) from the source.
- the introspector 1008 may further compute 1210 or retrieve metrics for the tables 1102 a - 1102 c.
- the metrics may include statistics (mean, median, standard deviation, min, max) of the columns of the table, one or more histograms each of a column of values in the table, top K most frequent values (K being an integer), cardinality, samples of values from the table, or other data describing the table or columns of the table.
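A minimal sketch of computing such metrics for one column, using only the Python standard library; the metric set shown (cardinality, top-K, basic statistics) is a subset of those listed above.

```python
# Sketch of step 1210: computing column metrics (statistics, top-K most
# frequent values, cardinality) for a retrieved table. A pure-Python stand-in
# for whatever statistics facility the sources actually provide.
from collections import Counter
from statistics import mean, median, stdev

def column_metrics(values, k=3):
    counts = Counter(values)
    numeric = [v for v in values if isinstance(v, (int, float))]
    metrics = {
        'cardinality': len(counts),          # number of distinct values
        'top_k': counts.most_common(k),      # (value, frequency) pairs
    }
    if len(numeric) >= 2:
        metrics.update(mean=mean(numeric), median=median(numeric),
                       stdev=stdev(numeric), min=min(numeric), max=max(numeric))
    return metrics

m = column_metrics([1, 2, 2, 3, 2, 5])
print(m['cardinality'], m['top_k'][0])  # 4 (2, 3)
print(m['min'], m['max'], m['mean'])    # 1 5 2.5
```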
- the method 1200 may further include creating 1212 an index of the database 1002 according to the catalog 1100 and creating views (managed views 1010 ) for the tables 1102 a - 1102 c of the database 1002 using the catalog 1100 .
- this index and these views include indexes and views created without a user request to do so and prior to any access by the user of the tables referenced by the views.
- These views may then be represented in the interface 108 a to the user and these representations may be selected or otherwise accessed by the user to invoke their use by the engine 104 to process queries to the database 1002 .
- a system 1300 in the environment 100 may include a recommendation model builder 1302 that generates a recommendation model 1304 .
- the recommendation model builder 1302 may take as inputs some or all of the following: the remote metadata 1004 , the managed views 1010 , and a logical query history 1306 that is a listing of queries received from one or more of the interfaces 108 a - 108 c.
- the logical query history may represent queries as strings as received from the user or as query models derived from query strings received from a user.
- queries may also be generated by an application implementing an interface 108 a - 108 c.
- the logical query history of a query may include an actual query model of a query (including any pushing of a computation to a source or referencing of a cached data on a second source rather than a first source that is the source of record for the data) and may further include the original query model based on the query received from an interface 108 a - 108 c. Queries including these pairs of actual and original query models may then be used to suggest the creation of a new view with those specific operations against a single source that could then be cached, receive pushed computations, or have some other aspect of its operation modified as described herein.
- the logical query history references queries with respect to multiple computing platforms 102 a - 102 c and may reference queries with respect to multiple cloud-computing platforms.
- Multiple users may access the unified access layer 106 by way of multiple interfaces 108 a - 108 c such that the logical query history further includes queries generated by multiple users, e.g., a team of users.
- the recommendation model builder 1302 may be used to improve the performance of collaborative use of a hybrid multi-cloud environment in accordance with the approach described below.
- the user interface 108 a or a component in the engine 104 or unified access layer 106 may use the recommendation model 1304 to output suggestions 1308 to the user through the user interface 108 a, the suggestions 1308 being suggested modifications to queries in order to improve performance, completeness, or other property of a result of a query.
- the user may also provide tuning parameters 1310 through the interface 108 a in response to the suggestions 1308 .
- the tuning parameters 1310 may then be implemented by the engine 104 and/or unified access layer 106 with respect to a query.
- FIG. 14 illustrates a method 1400 that may be executed by one or more computer systems implementing the system 1300 .
- the method 1400 may include monitoring 1402 queries, such as by evaluating the logical query history 1306 of one or more users with respect to one or more databases 1002 .
- the method 1400 may further include evaluating some or all of the structures of the queries, metrics of the queries (e.g., performance metrics as described above with respect to FIGS. 1 through 5B ), and statistics (see discussion of step 1210 ) of tables referenced by the queries.
- the method 1400 may include generating 1406 the recommendation model 1304 according to the evaluation 1404 . Transformations may then be suggested 1408 according to the recommendation model 1304 and then implemented upon approval by the user. Alternatively, suggestions according to the recommendation model may be autonomously implemented. In particular, generating of views, materialized views, caches and the like as well as transformations of queries to take advantage of these may be performed autonomously by the unified access layer 106 . In yet another alternative, transformations may be implemented for a query followed by issuing a notification to the user requesting confirmation whether the transformation should be kept or not repeated for subsequent queries (for example where the transformation involves caching that could improve performance and can be easily reversed if not desired by a user). The recommendation model 1304 may be generated 1406 and suggest 1408 implementations of transformations according to some or all of the following examples.
- Note that multiple of the transformation examples below may be implemented in a combined fashion and that some of the transformations in the examples below may also be implemented and subsequently pruned in response to changes in the query history or databases queried over time.
- multiple transformations may be generated for a single query (e.g., for different partial queries of a single query)
- where a transformation is found to no longer provide a benefit, that transformation may be removed (stop caching, remove view, etc.).
- a transformation may be pruned where a change in the policies 110 indicates that the transformation is no longer permitted (e.g., a change in available resources for a particular source).
- the query history may indicate one or more joins of table T 1 from source S 1 and table T 2 from source S 2 , where S 1 and S 2 could be the same source or different sources.
- Table T 1 has cardinality C 1 and table T 2 has cardinality C 2 .
- the computational cost of joining tables T 1 and T 2 may be estimated.
- the monetary cost of performing the join on source S 1 , S 2 , or a local computing platform P may then be estimated.
- the computational latency based on processing speed of sources S 1 , S 2 , and P for performing the join may be computed based on the computational cost.
- a network latency of various scenarios for performing the join may be computed based on the size of tables T 1 and T 2 , where the scenarios include: transfer T 1 from S 1 to S 2 and transfer the result of the join to P; transfer T 2 from S 2 to S 1 and transfer the result of the join to P; or transfer T 1 and T 2 from S 1 and S 2 , respectively, to P.
- a monetary cost of a scenario may also be computed, such as using the query performance model 204 discussed above. Based on these measures of performance (latency and cost), a scenario may be identified that has improved performance relative to other scenarios, e.g. has a score based on a sum or weighted sum of computational latency, network latency, and cost that is less than the score of another scenario calculated in the same manner.
- the recommendation model 1304 may be programmed at step 1408 to suggest implementing joins of tables T 1 and T 2 according to the scenario identified as having improved performance.
- the recommendation model 1304 may further be programmed to execute joins according to the identified scenario any time there is a join of any table TA on source S 1 and any other table TB on source S 2 having a difference in cardinality approximating the difference in cardinality between tables T 1 and T 2 (e.g., the cardinality of TA being (X + or − Y) % larger than the cardinality of TB, where X is the difference in cardinality of T 1 and T 2 and Y is a tolerance that is either a fixed value or a function of attributes of one or both of TA and TB).
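The scenario comparison of Example 1 can be sketched as a small cost model. The bandwidth and compute-rate constants below are invented for illustration; a real deployment would calibrate them per source from the query performance model 204.

```python
# Hedged sketch of Example 1: score the three placement scenarios for a join
# of T1 (on S1) and T2 (on S2) and pick the cheapest. Sizes are in MB; the
# rate constants are illustrative assumptions, not values from the patent.

def score_scenarios(size_t1, size_t2, result_size,
                    bandwidth_mb_s=100, compute_ms_per_mb=2):
    # Scenario -> MB moved over the network (inputs plus the result shipped
    # to the local computing platform P where applicable).
    transfers = {
        'join_on_S2': size_t1 + result_size,   # move T1 to S2, result to P
        'join_on_S1': size_t2 + result_size,   # move T2 to S1, result to P
        'join_on_P':  size_t1 + size_t2,       # move both tables to P
    }
    joined_mb = size_t1 + size_t2              # compute cost assumed equal
    scores = {name: mb / bandwidth_mb_s * 1000 + joined_mb * compute_ms_per_mb
              for name, mb in transfers.items()}
    return min(scores, key=scores.get), scores

best, scores = score_scenarios(size_t1=500, size_t2=20, result_size=5)
print(best)  # join_on_S1: ship only the small table and the result
```

A weighted sum of computational latency, network latency, and monetary cost, as described above, would simply add further terms to the score.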
- Example 2 The metadata from various tables on various sources may be evaluated to identify related data sets.
- the system 1300 may get foreign key information from the tables of remote sources and determine whether the foreign keys of tables on different sources overlap one another or the local keys of databases hosted by the local computing platform P executing the system 1300 . For example, if all of the keys of a table T 1 on source S 1 match the keys of a table T 2 on source S 2 , then tables T 1 and T 2 may be determined to be related such that the recommendation model 1304 may be programmed to suggest including T 2 in queries referencing T 1 , and vice versa.
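The key-overlap test of Example 2 reduces to a set-containment check, sketched here; the retrieval of foreign-key metadata from remote sources is elided.

```python
# Sketch of Example 2: treat two tables on different sources as related when
# the key values of one are contained in the key values of the other. A
# simplified stand-in for the foreign-key comparison described above.

def related(keys_t1, keys_t2):
    k1, k2 = set(keys_t1), set(keys_t2)
    return k1 <= k2 or k2 <= k1   # one key set contained in the other

# All keys of T1 on S1 match keys of T2 on S2 -> suggest including T2 in
# queries referencing T1, and vice versa.
print(related([101, 102, 103], [101, 102, 103, 104]))  # True
print(related([101, 999], [101, 102, 103]))            # False
```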
- Example 3 Caching of a table may be recommended based on the query history, e.g., a frequently accessed table may be cached on a local computing platform P or a source other than the source of record for that table. Likewise, the query history may be evaluated to determine whether the cached table should be indexed.
- the index may be a tree index, bitmap index, or other type of index.
- a table may be a join of two or more tables with respect to a particular key such that indexing with respect to that key is advantageous.
- in some instances, reads of a table are for specific values or are scans of the entire table such that indexing is not beneficial. Accordingly, indexing is not suggested in such instances.
- where the query history indicates that reads or filtering are not performed predominantly (e.g., are below a threshold level of frequency) with respect to any particular key or column of a cached table, indexing may be omitted as not being advantageous.
- the recommendation model 1304 may determine whether a cached table should be partitioned in the cache. In particular, this may be done where the query history indicates that particular columns or rows of the cached table are used more than others or to facilitate parallel access of different portions of the cached table as indicated by the query history.
- the recommendation model 1304 may further suggest a frequency of updating of a cache based on the frequency of writes to the table that is cached, e.g., an update frequency that is some fraction of the frequency of writes to the table.
- a timing of creating and updating a cache may be determined by the recommendation model 1304 according to the query history, e.g., performed during low usage periods or after performance of update cycles on the source of record for the cached data.
- Example 4 A view of one or more tables on one or more sources S 1 -S 3 may be suggested along with caching of a materialized version of this view on one of these sources, the local computing platform P, or a different source. This action may be recommended by the recommendation model 1304 in response to detecting a number of queries by one or more users of a team that reference the data in the view and that meet a threshold criterion, e.g., where the number of queries per unit time and the volume of data referenced indicate a performance savings from caching in a materialized view based on the query performance model 204 .
- a cached materialized view may represent a “partial query,” e.g. a portion of the accessed tables and operations performed on the accessed tables that were included in one or more previous queries that may then be cached in a materialized view for subsequent use.
- Caching the tables as a materialized view rather than separately may also be suggested/implemented to facilitate pushing down of computation to the source caching the materialized view.
- the recommendation model 1304 may further suggest/implement a materialized view corresponding to that operation in order to further improve execution of subqueries. Note that this is particularly helpful where the materialized view is of tables on different sources S 1 -S 3 such that the unified access layer's 106 virtualization enables both the detection of the possibility of the materialized view and its implementation.
- the recommendation model 1304 may suggest separating a view defined by a user or even a view that is automatically generated according to the methods disclosed herein. In particular, this may be done to improve performance when computation involving the tables referenced by the view is pushed down to a source of the tables or a location where the tables are cached.
- Example 6 The recommendation model 1304 may suggest that caching of a view or a table be disabled where usage frequency of the view or table as indicated in the query history is below a predefined threshold (i.e., cold data) or is low relative to other views or tables referenced in the query history.
- the recommendation model 1304 may further suggest disabling of caching of a table where the updating of the table is performed at a frequency below some threshold or is low relative to other tables.
- Example 7 The recommendation model 1304 may suggest pushing an asymmetric join of a large table with a small table (some predefined percentage smaller than the large table) to the computing platform hosting the large table.
- the suggestion may be that the join be shifted to the local computing platform P and that the tables be cached on the local computing platform P.
- Example 8 The query history may indicate queries to multiple sources having tables with identical keys. Accordingly, the recommendation model 1304 may detect that and suggest expanding a query to one of these tables to include these tables from multiple sources having identical keys (i.e. blending the multiple tables).
- Example 9 Many business intelligence (BI) queries (e.g., from interface 108 c ) follow a similar pattern of joining multiple dimension tables with a fact table to allow for grouping on, filtering by, or displaying dimension details. The BI tool will then typically execute these joins for every single query that it executes. Similar issues exist for machine learning (ML) model building, since the source data may have to be queried multiple times in order to generate a model or perform other functions of a ML algorithm.
- the recommendation model 1304 may detect such activity and suggest or implement transformations accordingly. For example, instead of computing a join for every single query from the BI tool or ML algorithm, the join can be materialized (denormalized) on the local computing platform P, a source of one or both of the tables joined, or another source to which the materialized view is pushed in order to improve performance of a subsequent computation according to the methods disclosed herein. This will improve performance of downstream queries that reference that join.
- the recommendation model 1304 may suggest a cached materialized view of this join on the local computing platform P.
- the view may be a complete join of all the original tables referenced in the preceding query. Since the tables are on different sources, the virtualization functionality of the unified access layer 106 may be used to retrieve the data from the different sources and create the materialized view.
- the unified access layer 106 may invoke creation and maintenance of the materialized view on that source, the local computing platform P, or a different source.
- materialized views may be combined where indicated by the query history. For example, where materialized views of multiple joins are all referenced in the same query one or more times, a materialized view of the multiple joins may be created, such as on the same source on which the individual materialized views of the joins are created.
- a materialized view may be pruned (deleted) in view of the query history, such as where a frequency of occurrence of queries including the join of a materialized view is below a threshold for a predetermined time period.
- the materialized views may be created and pruned in response to changes in system capabilities. For example, where more storage is added to a source, the number of materialized views may be increased. For example, for a first system capability, a certain amount of storage is available for materialized views and a first number of joins having the highest frequency of occurrence are selected and corresponding materialized views are created. If the amount of storage increases, a second number of joins having the highest frequency of occurrence, which may include some joins omitted from the first number of joins, may be selected and corresponding materialized views may be created. When storage decreases, the reverse of this process occurs such that some joins in the second number will be pruned as occurring less frequently than those joins in the first number of joins.
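The storage-driven selection described above can be sketched as a greedy pick of the most frequent joins under a storage budget; the join names, frequencies, and sizes below are invented for illustration.

```python
# Sketch of storage-driven materialized-view selection: materialize the most
# frequently occurring joins until the storage budget is spent. Shrinking the
# budget and re-running the selection models the pruning described above.

def select_joins(join_stats, budget_mb):
    """join_stats: {join_name: (frequency, size_mb)}. Greedy by frequency."""
    chosen, used = [], 0
    for name, (freq, size) in sorted(join_stats.items(),
                                     key=lambda kv: -kv[1][0]):
        if used + size <= budget_mb:
            chosen.append(name)
            used += size
    return chosen

stats = {'orders_x_customers': (50, 200),   # (queries/day, size in MB)
         'orders_x_items':     (30, 150),
         'items_x_suppliers':  (10, 100)}
print(select_joins(stats, budget_mb=250))  # ['orders_x_customers']
print(select_joins(stats, budget_mb=400))  # ['orders_x_customers', 'orders_x_items']
```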
- Example 10 Many BI queries follow similar patterns of computing aggregations (e.g., count, sum, avg) of one or more metrics on results filtered on one or more dimensions. These aggregations can be generalized either:
- the recommendation model 1304 will choose the above transformation to support "similar" queries where the filter condition may be changing, but the metric summarization is not. This helps to reduce the amount of data fetched from source systems and the amount of compute time. The resulting cache entries are also much smaller than the original dataset. This enables the system to cache many versions of these pre-aggregated queries.
- datetime columns are special since most tools treat datetime dimensions dynamically: a single datetime column can be treated like a record of year, month, day, hour, minute, etc. (i.e., a hierarchical dimension).
- a transformation of a query referencing a datetime column according to the approaches described above may be as follows:
- the above transformation allows the user to query with different time ranges while reducing the amount of data cached and reducing the compute time of the aggregations.
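A minimal sketch of the datetime pre-aggregation idea described above: roll raw rows up to day granularity once, then answer arbitrary time-range queries from the small rollup rather than the raw table. The data and the choice of day granularity are illustrative.

```python
# Sketch of Example 10's datetime handling: pre-aggregate a metric at day
# granularity so queries over differing time ranges are answered from the
# cached rollup. Data and column roles are invented for illustration.
from collections import defaultdict
from datetime import date

rows = [(date(2020, 5, 1), 10), (date(2020, 5, 1), 5),
        (date(2020, 5, 2), 7),  (date(2020, 6, 1), 3)]

# Build the cached rollup once: day -> sum(metric).
daily = defaultdict(int)
for d, value in rows:
    daily[d] += value

def total_between(start, end):
    """Answer a range query from the rollup instead of the raw table."""
    return sum(v for d, v in daily.items() if start <= d <= end)

print(total_between(date(2020, 5, 1), date(2020, 5, 31)))  # 22
```

Because the rollup is far smaller than the raw rows, many such pre-aggregated entries can be cached, consistent with the discussion above.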
- Example 11 The approach of Examples 9 and 10 may be generalized to include more generalized improvements in the execution of subqueries. For example, BI queries and ML model building will use subqueries as part of their computations. With Push-Down of computations enabled, the incoming query may be broken up into smaller subqueries to be pushed down to the source systems. Whether to push down a subquery to a source may be determined according to the approaches described herein. In response to instances where a subquery is pushed down for data access and computation by a remote source, the recommendation model 1304 may suggest creating a view corresponding to that subquery on the remote source, which may be a materialized view.
- lookup/dimension data may exist on a different source S 1 -S 3 from data sets they can be joined with.
- lookup/dimension data may need to be cleaned and prepared by the unified access layer 106 before it can be joined to data that exists on another source.
- the recommendation model 1304 may therefore detect a join in such a scenario and suggest or implement pushing of the data for a remote table or results of view maintained by the unified access layer 106 onto the source system to enable a join. This enables more computation to be performed on a source system instead of on the local computing platform P.
- suggested transformations per step 1408 may be presented to a user for acceptance.
- an object including data defining the suggestion may be created and a representation of it displayed to a user. The user may then interact with the representation to view the suggestion and to invoke execution of the suggestion.
- Queries may then be received 1410 and executed with respect to views and cached data as defined at step 1408 .
- the unified access layer 106 may transform the query to change references to sources of record to references to cached tables and materialized views on the premise or a source other than the source of record.
- the method 1400 may further include augmenting 1412 the query history 1306 according to the queries received at step 1410 .
- any new tables referenced by a query at step 1408 may be used to augment the query history 1306 and metadata 1004 .
- This new data may then be processed according to the method 1200 , 1400 , or other methods disclosed herein.
- the illustrated method 1500 may be executed in the environment 100 , such as by one or more computing systems implementing the engine 104 , unified access layer 106 , and/or user interface 108 a.
- the method 1500 may include obtaining 1502 table data for one or more tables on one or more sources, including one or more remote sources.
- Step 1502 may include executing the method 1200 with respect to one or more queries.
- the method 1500 may further include collecting 1504 the query models of one or more queries.
- the query models may be hierarchies as described above with respect to FIGS. 5A and 5B in which each node is either a data access with respect to a source or an operation performed with respect to an output of another node (operation or data access).
- the method 1500 may further include generating a hash with respect to one or more tables discovered at step 1502 or referenced in queries at step 1504 .
- the hash for a table may be a function of some or all of the following attributes of the table: keys, number of columns, number of rows, statistics (e.g., some or all of the statistics described with respect to the method 1200 ), a query model or a portion of a query model that occurs in one or more queries that reference the table.
- where one or more queries have query models that reference a table, the query model used at step 1506 may include a portion of a lineage that both references the table and is common to multiple queries.
- for example, the query models may be deemed common if the joins of the query models reference the table and a same second table.
- the hash from step 1506 may be according to a function such that the output of a first hash will be increasingly closer in value to a second hash where first data (steps 1502 , 1504 ) used to generate the first hash is increasingly similar to second data used to generate the second hash.
- the hash function may be a locality-sensitive hash function.
- the locality-sensitive hash function of a table may be a function of some or all of the following attributes of a table: features of a schema of the table, statistical values based on the table, the top K most frequent values (K being an integer), a histogram of one or more columns of the table, a sample of data from one or more rows and columns of the table, or the like.
- the column and row positions of the data sampled may be selected based on statistical analysis of the tables being compared. For example, a machine learning model may be trained with pairs of tables labeled with their status (matching or non-matching) to determine statistical metrics that may be used to select samples for inclusion in the hash that are likely to be relevant and facilitate the successful identification of matching tables.
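A locality-sensitive signature of the kind described at steps 1506-1508 can be sketched with a SimHash-style construction over table features; the specific feature encoding below is an assumption, not the patent's.

```python
# Hedged sketch of a locality-sensitive hash over table attributes (schema
# features, statistics, sampled values): a SimHash-style signature, so that
# tables with similar feature sets tend to yield nearby signatures.
import hashlib

def simhash(features, bits=32):
    weights = [0] * bits
    for f in features:
        h = int(hashlib.md5(str(f).encode()).hexdigest(), 16)
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, w in enumerate(weights) if w > 0)

def hamming(a, b):
    return bin(a ^ b).count('1')

t1 = ['col:id:int', 'col:name:str', 'rows:1000', 'min:1', 'max:999']
t2 = ['col:id:int', 'col:name:str', 'rows:1000', 'min:1', 'max:998']  # near-twin
t3 = ['col:x:float', 'col:y:float', 'rows:7']

# Near-twin tables tend to land at a small Hamming distance; unrelated
# tables tend to land far apart.
print(hamming(simhash(t1), simhash(t2)), hamming(simhash(t1), simhash(t3)))
```

Identical feature lists always yield identical signatures, which is what makes exact-signature matching usable for identifying candidate sets at step 1508.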
- the method 1500 may include identifying 1508 match candidates, i.e., sets of tables whose corresponding hashes from step 1506 are identical. For each set, the method 1500 may include determining 1510 whether the tables are in fact identical. For example, for a first table in a set and a second table in the set, the values from step 1502 for the first and second tables may be compared to confirm the same keys, the same number of columns, the same data type in each column, the same number of rows, the same statistics, the same histograms for the same columns, etc. In some embodiments, one or more heuristics based on the first and second tables may be compared. They may include heuristics based on continuous and/or categorical data (i.e., non-numerical data) contained in the first and second tables.
- step 1510 may include performing a value-by-value comparison for the tables, i.e. comparing a value at a row and column position in the first table to the value at the same row and column position in the second table. If all values are found to be identical, the tables may be verified 1510 to match.
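The verification of step 1510 can be sketched as cheap metadata checks (row and column counts) followed by the value-by-value comparison described above; the in-memory list-of-rows table representation is an assumption for illustration.

```python
def tables_match(t1, t2):
    """Verify a hash-collision candidate pair: cheap metadata checks
    first, then a value-by-value comparison at identical positions."""
    if len(t1) != len(t2):                   # same number of rows
        return False
    if t1 and len(t1[0]) != len(t2[0]):      # same number of columns
        return False
    # Compare the value at each (row, column) position in both tables.
    return all(
        v1 == v2
        for row1, row2 in zip(t1, t2)
        for v1, v2 in zip(row1, row2)
    )

a = [(1, "x"), (2, "y")]
b = [(1, "x"), (2, "y")]
c = [(1, "x"), (2, "z")]
assert tables_match(a, b)
assert not tables_match(a, c)
```

The ordering matters for cost: the metadata checks reject most false candidates before the O(rows x columns) value scan runs.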
- a policy constraint (e.g., from policies 110 ) may be imposed 1512 that restricts use of all but one of the tables.
- a table may be constrained to be stored on a particular source such that identifying the table on a different source or on the local computing platform P is a violation of this policy.
- the method 1500 may include suggesting using the copy of the table on the source of record in the place of identical cached copies of the table found elsewhere (e.g., local computing platform P).
- the method 1500 may include determining 1514 trust scores for the copies of the table to determine which is more trustworthy, e.g., more current, authentic, more secure, etc. For example, a trust score could be calculated for each copy of a table that is a function of whether the table is stored by a source of record, how frequently the copy of the table is updated (e.g., a cached copy), a level of security provided by a source hosting the copy of the table, or other parameters. These trust scores may then be presented to the user by way of the interface 108 a such that the user may select among the copies of the table to use for subsequent queries. Alternatively, the copy with the highest trust score may be automatically selected for processing subsequent queries. For example, references to non-selected copies may be replaced with references to the selected copy with the highest trust score when executing subsequent queries.
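One possible form of the trust score of step 1514 is a weighted combination of the factors mentioned above; the weights, attribute names, and freshness formula below are illustrative assumptions, not a prescribed scoring function.

```python
def trust_score(copy, w_record=0.5, w_fresh=0.3, w_sec=0.2):
    """Weighted trust score over source-of-record status, update
    recency, and host security level (all weights hypothetical)."""
    freshness = 1.0 / (1.0 + copy["hours_since_refresh"])   # 1.0 when current
    return (w_record * (1.0 if copy["is_source_of_record"] else 0.0)
            + w_fresh * freshness
            + w_sec * copy["security_level"])               # in [0.0, 1.0]

copies = [
    {"name": "S1",    "is_source_of_record": True,
     "hours_since_refresh": 0,  "security_level": 0.9},
    {"name": "cache", "is_source_of_record": False,
     "hours_since_refresh": 24, "security_level": 0.5},
]
# Automatic selection of the most trustworthy copy for subsequent queries.
best = max(copies, key=trust_score)
assert best["name"] == "S1"
```

The per-copy scores could equally be surfaced through the interface 108 a for a user to choose from, as the description notes.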
- the method 1500 may further include identifying 1516 a performance improvement from substituting one table from a set of matching tables with another copy. In particular, this may include using the query performance model 204 .
- alternatives are evaluated to determine their performance (see discussion of steps 406 and 408 ). Accordingly, step 406 may include identifying alternatives in view of the known matches. For example, suppose table T 1 is on source S 1 and table T 2 , found to be a match, is on source S 2 .
- an operation referencing table T 1 may have the following expanded list of alternatives: perform access on S 1 with processing on local computing platform P, perform access on P (cache T 1 from S 1 ) and processing on local computing platform P, perform access and processing on S 1 (provided S 1 has capacity to do so), perform access on S 2 with processing on local computing platform P, perform access on local computing platform P (cache T 1 from S 2 ) and processing on local computing platform P, and perform access and processing on S 2 (provided S 2 has capacity to do so).
- step 1516 may include automatically redirecting accesses from one table to a matching table on a different source in order to improve performance or providing suggestions to do so in the form of suggestions through the interface 108 a, which the user may then select to invoke implementation as described above with respect to FIG. 4 .
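The expanded list of alternatives above can be enumerated and ranked with a cost model such as the query performance model 204; the cost values and the `cost_model` callable below are hypothetical placeholders for that model's estimates.

```python
def expand_alternatives(table_source, match_sources, cost_model):
    """Enumerate (access_location, processing_location) pairs for a table
    and every known matching copy, and pick the cheapest per the model."""
    alternatives = []
    for source in [table_source] + match_sources:
        alternatives.append((source, source))   # push processing to the source
        alternatives.append((source, "P"))      # access at source, process on P
        alternatives.append(("P", "P"))         # cache on P, process on P
    return min(alternatives, key=cost_model)

# Hypothetical estimated costs for T1 on S1 with a matching copy on S2.
costs = {("S1", "S1"): 9, ("S1", "P"): 7, ("P", "P"): 3,
         ("S2", "S2"): 8, ("S2", "P"): 6}
best = expand_alternatives("S1", ["S2"], lambda alt: costs[alt])
assert best == ("P", "P")   # caching on P wins under these assumed costs
```

In practice each `(access, processing)` pair would also be filtered by capacity and policy constraints (e.g., whether S1 has capacity to process) before ranking.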
- a table T 1 on source S 1 may be determined to be part of a composite of table T 1 and T 2 on local computing platform P or another source S 2 .
- a performance improvement may include suggesting use of the composite.
- An accuracy improvement may include suggesting using the tables T 1 and T 2 from the source S 1 rather than the composite to avoid using non-current data.
- FIG. 16 illustrates a system 1600 that may execute within the environment 100 in order to process queries according to user preference regarding balancing of latency with respect to computational resources and monetary cost required to achieve that latency.
- the system 1600 may include a tuning module 1602 that executes on one or more computer systems executing the engine 104 and unified access layer 106 or some other computer system.
- the tuning module 1602 may access the query performance model 204 and policies 110 .
- the tuning module 1602 may further facilitate the definition and implementation of a push policy 1604 , cache policy 1606 , and scale policy 1608 .
- the push policy 1604 specifies parameters defining what computations should be pushed to the source hosting the data that is the subject of the computation.
- the cache policy 1606 specifies which data should be maintained in a cache on the local computing platform P or a cache on a different source S 1 -S 3 than the source of record for that data and may further specify parameters regarding writing back the cached data to the source from which it was retrieved or from updating the cached data from the source.
- the scale policy 1608 specifies when additional resources (computational or storage) will be acquired from a cloud computing platform 102 a or other scalable computing resource.
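The three policies above can be represented as simple records; the field names and default values below are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class PushPolicy:
    """Which computations should be pushed to the source hosting the data."""
    push_joins: bool = False
    push_aggregations: bool = False

@dataclass
class CachePolicy:
    """What may be cached on platform P, and how often it is refreshed."""
    cacheable_tables: tuple = ()
    refresh_seconds: int = 3600

@dataclass
class ScalePolicy:
    """When additional cloud resources may be acquired."""
    latency_target_ms: int = 500
    max_nodes: int = 8

# A hypothetical high-performance classification's policy bundle.
gold = (PushPolicy(push_joins=True, push_aggregations=True),
        CachePolicy(cacheable_tables=("orders", "customers"), refresh_seconds=60),
        ScalePolicy(latency_target_ms=100, max_nodes=32))
assert gold[2].latency_target_ms == 100
```

Bundling the three policies per classification is what lets the method 1700 tune each class of queries independently.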
- the tuning module 1602 may define the policies 1604 - 1608 in response to user inputs 1610 received by way of the interface 108 a.
- the tuning module 1602 may likewise present an interface for a user to see implications of tuning and to provide the tuning parameters to the tuning module 1602 .
- FIG. 17 illustrates a method 1700 that may be executed by the tuning module 1602 .
- the method 1700 may include receiving 1702 constraints.
- the constraints may include some or all of the constraints included in the policies 110 , such as those specifying which source is able to perform computations, what data (e.g. which tables from which source) is permitted to be cached on the local computing platform P, how often data may be read into a cache, how long data in cache is valid, or the like.
- the constraints 1702 may further include other constraints such as a budgetary constraint specifying how much money, processing time, cache capacity, or other resource is allocated to a user, application, workflow, workspace, or other division to which a query may be assigned.
- the constraints may be specified in general or may be specified for each source of a plurality of sources for a user, application, workflow, workspace, or other division to which a query may be assigned.
- the method 1700 may further include defining 1704 classifications of queries.
- the classifications may be applied to each query such that each query is processed according to constraints or policies 1604 - 1608 corresponding to its classification.
- Classifications may be associated with a particular user, application, workflow, workspace, or other division to which a query may be assigned. For example, some applications, e.g. data science interface 108 b may perform high volume computations without needing low latency. In contrast, a business intelligence tool 108 c may be expected to provide low latency.
- the classifications at step 1704 may be received from an operator or be pre-defined by an administrator or developer of the tuning module 1602 .
- Queries may also be classified based on attributes. For example, queries may be classified based on the volume of data referenced by the query, e.g. a first range of referenced volumes of data being assigned to a first classification, a second (different) range of referenced volumes being assigned to a second classification, and so on for any number of ranges. Queries may also be classified based on the complexity of the computation, e.g. a score may be assigned as a function of a number and type of computations and a score may then be classified to one of a plurality of classifications corresponding to a range including that score. In still other embodiments, a classification may be manually assigned to a query by a user.
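The attribute-based classification described above might be sketched as follows; the volume and complexity thresholds, the classification names, and the precedence of a manual label are all illustrative assumptions.

```python
def classify_query(volume_bytes, complexity_score, manual_label=None):
    """Assign a query to a classification: a manually assigned label wins,
    then referenced-data-volume ranges, then computation-complexity ranges."""
    if manual_label is not None:
        return manual_label
    if volume_bytes >= 10**9 or complexity_score >= 50:
        return "batch"          # high volume or complex: latency-tolerant
    if volume_bytes >= 10**6:
        return "interactive"
    return "low_latency"

assert classify_query(5 * 10**9, 3) == "batch"
assert classify_query(10**7, 3) == "interactive"
assert classify_query(10**3, 1) == "low_latency"
assert classify_query(10**3, 1, manual_label="batch") == "batch"
```

Each returned label would then index into the per-classification policies 1604 - 1608 generated at step 1708.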
- the method 1700 may further include defining 1706 expected performance for each classification.
- the expected performance may be specified in terms of a quality of service (QoS) or service level agreement (SLA), e.g. a desired latency (average or maximum) for queries belonging to that classification.
- the expected performance may be specified in terms of data volume, e.g. A + V*L, where A is a constant, V is the data volume, and L is a constant indicating a relationship between data volume and a desired latency.
- any other function may be used to determine a desired latency of a query in terms of data volume.
- the function for each classification may be different.
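The A + V*L form of the expected performance, with a different function per classification, can be sketched as below; the constants are placeholders, not values prescribed by the description.

```python
def expected_latency_ms(volume_gb, classification):
    """Desired latency as A + V*L with per-classification constants:
    A is a fixed overhead, L scales latency with data volume V."""
    params = {
        "low_latency": (50, 10),     # (A in ms, L in ms per GB) -- placeholders
        "interactive": (200, 50),
        "batch":       (5000, 500),
    }
    a, l = params[classification]
    return a + volume_gb * l

assert expected_latency_ms(2, "interactive") == 200 + 2 * 50   # 300 ms
```

A nonlinear function (e.g., one fitted from measured latencies) could be substituted per classification without changing the surrounding method.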
- the expected performance may be input by a user, such as by way of the interface 108 . For example, a slider, dial, or other input field may enable a user to specify the expected performance.
- the expected performance may be defined with respect to the constraints from step 1702 .
- a minimum latency may be estimated based on budgetary constraints, caching constraints, pushing constraints, or other constraints that limit available options and resources for reducing latency.
- a latency and consumed computing resources for a given scenario (caching, pushing computation, etc.) for executing a query with respect to a volume of data (e.g., an expected volume of data, maximum volume of data, or some other value) may be calculated.
- the cost of implementing that scenario may then be determined based on the computational resources consumed by that scenario.
- performance expectations may be specified only in relative terms, e.g., a classification may be assigned a high expected performance, medium expected performance, or low expected performance. Queries classified as having high expected performance will be performed with a scenario providing higher performance than queries classified as medium or low. Likewise, queries classified as having medium expected performance will be performed with a scenario providing higher performance than queries having the low expected performance. Any number of levels may be defined in this manner.
- the method 1700 may further include generating 1708 policies 1604 - 1608 for each classification according to the expected performances determined at step 1706 .
- performance enhancements such as caching, pushing computation to a source of data, or the like may be mandated by the policies 1604 , 1606 .
- scaling out to achieve a high relative performance or a specified latency may be specified in the policy 1608 .
- the policies may prohibit caching or consumption of resources on sources that have monetary cost in order to avoid impacting classifications with a higher expected performance and avoid incurring costs for queries that are of low priority.
- generating 1708 a caching policy 1606 may include generating an indexing policy for cached data, i.e. whether or not cached data is to be indexed and what key or column is to be indexed.
- the type of the index may also be specified (e.g., tree, bitmap).
- queries are executed according to the policies defined at step 1708 . This may include processing queries using the approach described below with respect to FIG. 18 .
- the performance of these queries (e.g. latency) may be measured 1710 and evaluated 1712 with respect to the expected performances for the queries, such as the expected performances defined at step 1706 . If the expected performance is not met, then the method 1700 may include adjusting 1714 the policies.
- the policies for that classification may be modified to include further performance enhancements, such as materialized views on the local computing platform P corresponding to one or more operations (joins, sub-queries, groupings, aggregations, etc.) of the queries, or caching of tables referenced by the queries on the local computing platform P.
- performance enhancement may include shifting computation of these operations to a source of the tables referenced by the operations or creating a materialized view on the source of the tables, the materialized view implementing an operation included in the queries such as joins, sub-queries, groupings, aggregations, or the like.
- Performance enhancements may include any of the approaches described herein for transforming or improving execution of a query in a heterogeneous data management and analytic computing platform.
- adjustments at step 1714 may include reversing or undoing one or more performance enhancements such as are mentioned in the previous paragraph in order to avoid unnecessary consumption of resources.
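The measure-evaluate-adjust loop of steps 1710 - 1714 can be sketched as follows, with performance enhancements applied when an expectation is missed and rolled back when it is comfortably met; the enhancement names and the 0.5 roll-back threshold are assumptions for illustration.

```python
def adjust_policies(measured_ms, expected_ms, enhancements, applied):
    """One iteration of the tuning loop: add the next pending performance
    enhancement when the expectation is missed, and roll one back when the
    expectation is comfortably met, to avoid unnecessary resource use."""
    if measured_ms > expected_ms and enhancements:
        applied.append(enhancements.pop(0))     # e.g., "cache_table", "push_join"
    elif measured_ms < 0.5 * expected_ms and applied:
        enhancements.insert(0, applied.pop())   # undo the most recent enhancement
    return applied

pending = ["cache_table", "materialized_view", "push_join"]
active = []
adjust_policies(measured_ms=900, expected_ms=300, enhancements=pending, applied=active)
assert active == ["cache_table"]                # target missed: enhance
adjust_policies(measured_ms=100, expected_ms=300, enhancements=pending, applied=active)
assert active == []                             # target exceeded: roll back
```

A production loop would additionally damp oscillation, e.g., by requiring several consecutive misses before changing the policies.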
- FIG. 18 illustrates a method 1800 for processing queries according to policies 1604 - 1608 and classifications according to the method 1700 .
- the method 1800 may be executed by the tuning module 1602 .
- the method 1800 may include receiving 1802 a query and classifying 1804 the query. In particular, this may include determining a context that issued the query, e.g., a user account, application, workflow, workspace, or other division defining a context in which the query was issued. A classification for that context as defined according to the method 1700 may then be retrieved. As noted above, a classification may also be based on attributes (data volume, complexity) of the query itself. Classification could also be performed manually, e.g., a user tagging a query with a classification. The method 1800 may further include retrieving 1806 policies 1604 - 1608 assigned to that classification.
- the method 1800 may include performing 1808 an evaluation 1810 of whether data referenced by the data access is, or is specified to be, cached.
- the engine 104 may cache data according to the cache policy 1606 .
- Caching may be performed prior to receiving a query or may be performed upon receiving a first query with respect to data that is permitted or required to be cached according to the cache policy 1606 , e.g. data from a source that is to be cached according to the cache policy 1606 .
- the data access may be performed 1812 with respect to the cached data.
- the cached data may also be updated as part of the access or in response to detecting a change in the version of the cached data in the source from which it was retrieved. Evaluating whether a cached table differs from the version of the table at the source from which it was retrieved may be performed prior to data access, periodically, or in response to detecting writing of data to that table on the source.
- a hash may be calculated of a table as stored at the source and compared to a hash of that table as cached on the local computing platform P. If the hashes differ, the table as stored on the source may be reloaded into the cache on the local computing platform P.
- the hash function used in such cases may be a cryptographic hash (e.g. MD5, SHA-1, or the like) rather than a locality-sensitive hash.
- the “hash” of the source and cache versions of the table may be substituted with another characteristic of these tables, such as a Max/Count (maximum value in the table divided by number of values in the table) for these tables.
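The cryptographic-hash staleness check described above can be sketched as follows (MD5 via Python's hashlib; the cache-dictionary layout and row representation are assumptions for illustration).

```python
import hashlib

def table_digest(rows):
    """Cryptographic digest of a table's contents. MD5 is used here as in
    the description; any collision-resistant hash works for change detection."""
    h = hashlib.md5()
    for row in rows:
        h.update(repr(row).encode())
    return h.hexdigest()

def refresh_if_stale(source_rows, cache):
    """Reload the cached copy on platform P only when the source digest
    differs from the cached digest."""
    if table_digest(source_rows) != cache.get("digest"):
        cache["rows"] = list(source_rows)
        cache["digest"] = table_digest(source_rows)
        return True     # cache was (re)loaded
    return False

source = [(1, "a"), (2, "b")]
cache = {}
assert refresh_if_stale(source, cache) is True      # first load
assert refresh_if_stale(source, cache) is False     # source unchanged
source.append((3, "c"))
assert refresh_if_stale(source, cache) is True      # source changed: reload
```

Unlike the locality-sensitive hash used for match candidates, a single changed value here flips the digest entirely, which is exactly what a staleness check needs.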
- the table is cached with or without indexing as specified by the policy 1606 , the policy 1606 further specifying the key or column to be indexed when indexing is required.
- the method 1800 may include performing 1814 a remote data access from the source without caching.
- the method 1800 may include performing 1816 for each computational operation (join, aggregation, etc.), evaluating 1818 the push policy 1604 for the classification from step 1804 . If the push policy 1604 for the source of data referenced by the operation and the classification from step 1804 indicates that the computation should be pushed to the source of the data referenced by the operation, then this is performed at step 1822 . Otherwise, the operation is performed 1820 locally, e.g. on the local computing platform P.
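The per-operation push decision of steps 1816 - 1822 might be sketched as below; modeling the push policy as a mapping from operation kind to a push flag, and the capability check, are assumptions for illustration.

```python
def place_operation(op, push_policy, source_can_compute):
    """Decide where a computational operation runs: pushed to the data
    source when the push policy allows that operation kind and the source
    has the capability, otherwise on the local computing platform P."""
    if push_policy.get(op["kind"], False) and source_can_compute:
        return op["source"]
    return "P"

policy = {"join": True, "aggregation": False}
assert place_operation({"kind": "join", "source": "S1"}, policy, True) == "S1"
assert place_operation({"kind": "join", "source": "S1"}, policy, False) == "P"
assert place_operation({"kind": "aggregation", "source": "S1"}, policy, True) == "P"
```

Falling back to P whenever the source lacks capacity keeps the decision safe even when the policy nominally allows pushing.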
- the method 1800 may further include evaluating whether operations performed by the remote computing platform are meeting 1824 latency targets.
- the scale out policy 1608 for the remote computing platform may specify a latency target. If this target is found 1824 not to be met by the processing of one or more operations, the method 1800 may include evaluating 1826 whether scaling out of the remote computing platform is permitted according to the scale out policy 1608 that applies to the remote computing platform and the classification from step 1804 . For example, this may include evaluating whether the current computing resources allocated on the remote computing platform are at a maximum permitted amount or whether the latency meets a condition at which scale out is to be performed as specified by the scale out policy 1608 . If so, then the remote computing platform is scaled 1828 out and additional computing and/or storage resources are requested (e.g., purchased) for use.
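The scale-out evaluation of steps 1824 - 1828 can be sketched as a simple guard on the latency target and node cap; the one-node-at-a-time increment is an illustrative assumption rather than a prescribed scaling step.

```python
def maybe_scale_out(latency_ms, target_ms, current_nodes, max_nodes):
    """Scale-out check: when the latency target is missed and the current
    allocation is below the permitted maximum, request one more node;
    otherwise leave the allocation unchanged."""
    if latency_ms > target_ms and current_nodes < max_nodes:
        return current_nodes + 1    # acquire additional resources
    return current_nodes

assert maybe_scale_out(800, 500, current_nodes=4, max_nodes=8) == 5
assert maybe_scale_out(800, 500, current_nodes=8, max_nodes=8) == 8   # at cap
assert maybe_scale_out(300, 500, current_nodes=4, max_nodes=8) == 4   # target met
```

The `max_nodes` cap is how the budgetary constraints of step 1702 bound the monetary cost of chasing a latency target.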
- FIG. 19 is a block diagram illustrating an example computing device 1900 .
- Computing platforms 102 a - 102 c, interfaces 108 a - 108 c, or other computing devices for implementing systems and methods as described above may have some or all of the attributes of the illustrated computing device 1900 .
- a cluster of computing devices interconnected by a network may be used to implement any one or more components of the invention.
- Computing device 1900 may be used to perform various procedures, such as those discussed herein.
- Computing device 1900 can function as a server, a client, or any other computing entity.
- Computing device 1900 can perform various monitoring functions as discussed herein, and can execute one or more application programs, such as the application programs described herein.
- Computing device 1900 can be any of a wide variety of computing devices, such as a desktop computer, a notebook computer, a server computer, a handheld computer, tablet computer and the like.
- Computing device 1900 includes one or more processor(s) 1902 , one or more memory device(s) 1904 , one or more interface(s) 1906 , one or more mass storage device(s) 1908 , one or more Input/output (I/O) device(s) 1910 , and a display device 1930 all of which are coupled to a bus 1912 .
- Processor(s) 1902 include one or more processors or controllers that execute instructions stored in memory device(s) 1904 and/or mass storage device(s) 1908 .
- Processor(s) 1902 may also include various types of computer-readable media, such as cache memory.
- Memory device(s) 1904 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 1914 ) and/or nonvolatile memory (e.g., read-only memory (ROM) 1916 ). Memory device(s) 1904 may also include rewritable ROM, such as Flash memory.
- Mass storage device(s) 1908 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown in FIG. 19 , a particular mass storage device is a hard disk drive 1924 . Various drives may also be included in mass storage device(s) 1908 to enable reading from and/or writing to the various computer readable media. Mass storage device(s) 1908 include removable media 1926 and/or non-removable media.
- I/O device(s) 1910 include various devices that allow data and/or other information to be input to or retrieved from computing device 1900 .
- Example I/O device(s) 1910 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
- Display device 1930 includes any type of device capable of displaying information to one or more users of computing device 1900 .
- Examples of display device 1930 include a monitor, display terminal, video projection device, and the like.
- Interface(s) 1906 include various interfaces that allow computing device 1900 to interact with other systems, devices, or computing environments.
- Example interface(s) 1906 include any number of different network interfaces 1920 , such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet.
- Other interface(s) include user interface 1918 and peripheral device interface 1922 .
- the interface(s) 1906 may also include one or more user interface elements 1918 .
- the interface(s) 1906 may also include one or more peripheral interfaces such as interfaces for printers, pointing devices (mice, track pad, etc.), keyboards, and the like.
- Bus 1912 allows processor(s) 1902 , memory device(s) 1904 , interface(s) 1906 , mass storage device(s) 1908 , and I/O device(s) 1910 to communicate with one another, as well as other devices or components coupled to bus 1912 .
- Bus 1912 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
- programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1900 , and are executed by processor(s) 1902 .
- the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware.
- one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
Description
- This application claims the benefit of U.S. Provisional Application Ser. No. 62/845,078 filed May 8, 2019 and entitled MANAGING DATABASE QUERY EXECUTION IN A HYBRID MULTI-CLOUD DATABASE ENVIRONMENT, which is hereby incorporated herein by reference in its entirety.
- Many database implementations exist, such as ORACLE, SQL, MYSQL, IBM DB2, SNOWFLAKE and many others. Likewise, there are many computing platforms that may store a database, such as a cloud storage platform: AMAZON WEB SERVICES (AWS), MICROSOFT AZURE, GOOGLE CLOUD, or the like. An enterprise may also have its own computing platform for storing and accessing a database. Data of an organization may be implemented using multiple database implementations on multiple computing platforms. It would be an advancement in the art to improve the function and performance in such a heterogeneous database environment.
- In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through use of the accompanying drawings, in which:
- FIG. 1 is a schematic block diagram of an environment in which methods may be practiced in accordance with an embodiment of the present invention;
- FIG. 2 is a schematic block diagram of a system for generating and using a query performance model;
- FIG. 3A is a process flow diagram of a method for generating a query performance model in accordance with an embodiment of the present invention;
- FIG. 3B is a block diagram illustrating a machine learning approach to generating a query performance model in accordance with an embodiment of the present invention;
- FIG. 4 is a process flow diagram of a method for using a query performance model in accordance with an embodiment of the present invention;
- FIGS. 5A and 5B are diagrams illustrating different scenarios for processing a query in accordance with an embodiment of the present invention;
- FIG. 6 is a process flow diagram of a method for suggesting combinations and alternative processing for queries in accordance with an embodiment of the present invention;
- FIG. 7 is a process flow diagram of a method for generating combined alternatives to queries at different levels in accordance with an embodiment of the present invention;
- FIG. 8 is a process flow diagram of a method for implementing a combined alternative in accordance with an embodiment of the present invention;
- FIG. 9 is a process flow diagram of a method for evaluating combined alternatives in accordance with an embodiment of the present invention;
- FIG. 10 is a schematic block diagram of a system for automatically generating indexes and views of a database in accordance with an embodiment of the present invention;
- FIG. 11 is a diagram of data structures of a database in accordance with the prior art;
- FIG. 12 is a process flow diagram of a method for automatically generating indexes and views of a database in accordance with an embodiment of the present invention;
- FIG. 13 is a schematic block diagram of a system for recommending data sources in accordance with an embodiment of the present invention;
- FIG. 14 is a process flow diagram of a method for recommending data sources in accordance with an embodiment of the present invention;
- FIG. 15 is a process flow diagram of a method for identifying redundant data sources in accordance with an embodiment of the present invention;
- FIG. 16 is a schematic block diagram of a system for tuning implementation of queries according to desired performance in accordance with an embodiment of the present invention;
- FIG. 17 is a process flow diagram of a method for generating policies according to expected performance in accordance with an embodiment of the present invention;
- FIG. 18 is a process flow diagram of a method for implementing policies for achieving an expected performance in accordance with an embodiment of the present invention; and
- FIG. 19 is a schematic block diagram of a computer system in accordance with the prior art for implementing systems and methods in accordance with an embodiment of the present invention.
- It will be readily understood that the components of the invention, as generally described and illustrated in the Figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description of the embodiments of the invention, as represented in the Figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of certain examples of presently contemplated embodiments in accordance with the invention. The presently described embodiments will be best understood by reference to the drawings, wherein like parts are designated by like numerals throughout.
- Embodiments in accordance with the invention may be embodied as an apparatus, method, or computer program product. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “module” or “system.” Furthermore, the invention may take the form of a computer program product embodied in any tangible medium of expression having computer-usable program code embodied in the medium.
- Any combination of one or more computer-usable or computer-readable media may be utilized. For example, a computer-readable medium may include one or more of a portable computer diskette, a hard disk, a random access memory (RAM) device, a read-only memory (ROM) device, an erasable programmable read-only memory (EPROM or Flash memory) device, a portable compact disc read-only memory (CDROM), an optical storage device, and a magnetic storage device. In selected embodiments, a computer-readable medium may comprise any non-transitory medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- Computer program code for carrying out operations of the invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages, and may also use descriptive or markup languages such as HTML, XML, JSON, and the like. The program code may execute entirely on a computer system as a stand-alone software package, on a stand-alone hardware unit, partly on a remote computer spaced some distance from the computer, or entirely on a remote computer or server. In the latter scenario, the remote computer may be connected to the computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- The invention is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions or code. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a non-transitory computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- Referring to
FIG. 1 , the methods disclosed in this application may be practiced in a database environment 100 having some or all of the illustrated components. In particular, the database environment may include databases implemented on multiple computing platforms 102 a-102 c, such as a first cloud computing platform 102 a (AMAZON WEB SERVICES (AWS), MICROSOFT AZURE, GOOGLE CLOUD, etc.); a datalake 102 b implemented on a second cloud computing platform, on-premise computing equipment, or other computing platform; and an on-premise computing platform 102 c. The computing platforms 102 a-102 c may have various capabilities, capacities, performance characteristics, and costs (monetary or computing) associated therewith. For example, some computing platforms 102 a-102 c may provide only basic database functions, such as retrieval of data referenced in a request. Others may provide database computations such as table joins, aggregations, or other complex computations. - In particular, a
cloud computing platform 102 a may provide both storage and computational resources that may store a database, implement data access, and perform arbitrary computations with respect to a database. - A
datalake 102 b may be embodied as a distributed file system storing disparate datasets in multiple formats. The datalake 102 b may store data in various storage structures such as relational, object, non-relational, text, hierarchical, and the like. A datalake 102 b may also provide some parallel computation facilities, e.g., MapReduce, Impala, Spark SQL, Hive LLAP, or the like. Datalakes 102 b may have varying processing capabilities and latencies. The datasets of a datalake 102 b are typically large, with limited pre-defined meta-data structures defining the datasets. The computing equipment implementing a datalake 102 b may be on-premise or in a cloud computing platform 102 a. The engine 104 may be programmed to interface with the datalake 102 b and to process data in various formats stored in the datalake 102 b. - An on-
premise computing platform 102 c may be a computing platform in which database software executes on computing hardware, typically without a virtualization layer such as would be present in a cloud computing platform 102 a. - Each of the computing platforms 102 a-102 c may implement databases according to various implementations such as ORACLE, SQL, MYSQL, IBM DB2, SNOWFLAKE, or the like. The databases implemented may also include non-relational storage systems, such as S3, Hbase, and the like. The databases implemented by some or all of the computing platforms 102 a-102 c may be sources of record for the data stored thereon and therefore store the current and authoritative copies of the data stored thereon.
- In the illustrated embodiment, access to the various computing platforms is performed through a scalable query engine 104 (“
engine 104”) that implements interfaces to the various computing platforms 102 a-102 c and the database implementations running on those platforms. The engine 104 may further interface with the cloud computing platforms to scale out and scale in computing resources allocated to storage of a database or computation with respect to a database. The engine 104 may do so by being programmed to use an application programming interface (API) or other interface exposed by a computing platform 102 a-102 c or database implementation, along with programming to monitor usage and perform scaling out and scaling in based on usage. - The
environment 100 may further include a unified access layer 106. The unified access layer 106 may provide an access point by which disparate user computing devices and users may access the engine 104. In particular, there are many different types of applications that consume database services for different purposes. Accordingly, the unified access layer 106 may be programmed to interface with such applications using APIs exposed by these applications. In this manner, access by multiple applications across an enterprise may be monitored and evaluated according to the methods disclosed herein in order to improve and/or manage performance of query processing. - The
unified access layer 106 may receive queries and provide responses to queries to a user interface 108 a that may be a general purpose interface for receiving database queries, providing responses to database queries, and presenting suggestions and representations of objects generated according to the methods disclosed herein, as well as receiving user responses to these suggestions and objects. The user interface 108 a may implement individual user accounts or dashboards that enable each user to create unique views of the databases implemented by the computing platforms 102 a-102 c. However, as described in detail below, activities of multiple users may be aggregated in order to improve performance of queries and to automate generation of views that are of interest to a team of users. - The
unified access layer 106 may further interface with a data science interface 108 b (e.g., a notebook application) for running experiments and making inferences with respect to large amounts of data. The unified access layer 106 may further interface with a business intelligence tool 108 c that performs queries in order to obtain performance indicators, reports, or other information from data describing operation of an enterprise or a target market. - The
unified access layer 106 may further access and implement policies 110 with respect to queries from the interfaces 108 a-108 c. These policies 110 may be identity based: certain users or organizations are permitted to access certain databases or tables within databases. These policies 110 may be resource based: certain users or organizations are permitted to use a certain amount of a particular resource, such as storage space or computing time of a particular computing platform 102 a-102 c. These policies may be money based: a certain user or organization may be permitted to perform queries from an external computing platform 102 a-102 b that cost up to a specified budget in a specified time period (e.g., a month). The policies 110 may further manage priorities, e.g., within a certain time period some users, organizations, projects (e.g., specific data spaces), or types of queries may be given higher priority than others in order to provide a desired quality of service (QoS) to customers. In another example, one interface 108 a-108 c may be given a higher priority than another in the policies 110. -
Policies 110 may also be capability based, i.e., certain computing platforms 102 a-102 c are capable of performing certain operations. For example, policies 110 may indicate which of the following a computing platform 102 a-102 c is capable of performing: executing a project, filtering on a read, processing a full query, or implementing parallel computation (e.g., MYSQL v. SNOWFLAKE). Other capabilities may include the ability to create remote tables and the ability to create remote materialized views. -
Policies 110 may be scale based, i.e., indicate an amount of data that may be stored or processed by a computing platform 102 a-102 c, an amount of computation per unit time that may be performed by a computing platform 102 a-102 c, or some other representation of the computing capacity of the computing platform 102 a-102 c. - Policies may specify scheduling availability and constraints for a computing platform 102 a-102 c, e.g., an amount of resources available for queries of a user, organization, or interface 108 a-108 c that may be scheduled to execute within a given time period. For example, this may include definitions of blackout periods, busy periods, or low usage periods for a particular computing platform 102 a-102 c.
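The several classes of policies 110 described above can be sketched as a simple admission check. The field names, rates, and the `Policy` structure below are illustrative assumptions for this sketch, not structures from the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Policy:
    """Illustrative policy record combining identity-, resource-, and
    money-based constraints of the kind described above."""
    allowed_tables: set = field(default_factory=set)  # identity based
    monthly_budget_usd: float = 0.0                   # money based
    spent_this_month_usd: float = 0.0
    max_compute_seconds: float = 0.0                  # resource based

def query_permitted(policy: Policy, table: str,
                    est_cost_usd: float, est_compute_s: float) -> bool:
    """Admit a query only if it satisfies every applicable policy class."""
    if table not in policy.allowed_tables:
        return False  # identity check: table not accessible to this user
    if policy.spent_this_month_usd + est_cost_usd > policy.monthly_budget_usd:
        return False  # money check: estimated cost would exceed the budget
    if est_compute_s > policy.max_compute_seconds:
        return False  # resource check: compute allotment exceeded
    return True
```

Capability-, scale-, and schedule-based policies would add further predicates of the same form, each consulted before a query is dispatched to a platform 102 a-102 c.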
Policies 110 may include a user-specified preference among sources 102 a-102 c, such as a preferred cloud storage provider 102 a, a preferred cloud database provider (SNOWFLAKE, REDSHIFT, etc.), or the like, to which computation may be pushed according to the methods described herein. - Data regarding performance of query execution may be gathered and used according to the methods disclosed herein. For example, a
monitoring module 112 a may detect both a query output by the unified access layer 106 and a response to that query, thereby enabling the monitoring module 112 a to detect the overall latency of a query and its response. - Monitoring modules 114 a-114 c may likewise each detect a query to a computing platform 102 a-102 c and its response and use this to determine its latency. Monitoring modules 114 a-114 c may monitor other aspects of a query, such as an amount of computing time used by the computing platform 102 a-102 c to process the query. For example, such a metric may be reported by the computing platform 102 a-102 c with the response to the query.
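The latency measurement just described can be sketched as a wrapper that timestamps a query at dispatch and at response; `time.monotonic` and the `execute` callable are illustrative choices, not identifiers from the disclosure:

```python
import time

def measure_latency(execute, query):
    """Run `execute(query)` and return (result, latency_seconds).

    The wrapper observes both the outgoing query and its response,
    mirroring how a monitoring module positioned between the access
    layer and a computing platform can derive overall latency without
    instrumenting either side.
    """
    start = time.monotonic()            # query observed leaving
    result = execute(query)             # platform processes the query
    latency = time.monotonic() - start  # response observed returning
    return result, latency
```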
- Monitoring modules 114 a-114 c may monitor any aspect of resource consumption by a query such as bandwidth consumption. This may include network capacity consumed transmitting data between the
scalable query engine 104 and a source 102 a-102 c as part of processing of a query. Likewise, parallel processing of a query may result in data transfer locally within a particular source 102 a-102 c. Accordingly, amounts of data transfer within a particular source resulting from a query may also be monitored and reported to the monitoring modules 114 a-114 c. In some implementations, network bandwidth is purchased such that bandwidth consumption may be converted to a monetary cost according to the cost per unit of data transmitted. - In some embodiments, a source 102 a-102 c may record an amount of disk usage (amount of data written to and read from one or more disks) resulting from a query and report this to the monitoring modules 114 a-114 c.
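The bandwidth-to-cost conversion mentioned above reduces to multiplying the measured transfer by the purchased rate. The rate and units below are illustrative assumptions, not figures from the disclosure:

```python
def bandwidth_cost_usd(bytes_transferred: int, usd_per_gb: float = 0.09) -> float:
    """Convert monitored bandwidth consumption to a monetary cost.

    `usd_per_gb` stands in for the purchased cost per unit of data
    transmitted; the 0.09 default is an assumed example rate.
    """
    gigabytes = bytes_transferred / 1e9
    return gigabytes * usd_per_gb
```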
- Monitoring modules 112, 114 a-114 c may further record other aspects of queries in addition to latency and resource consumption. For example, some or all of the following metrics may be recorded for a query:
- Timing (when received)
- Complexity (e.g., type and number of computations)
- Object usage (tables, views, remote calls, etc. referenced in the query or performed in response to the query)
- Monitoring modules 112, 114 a-114 c may further capture data describing data accessed by queries. Such data may include data demographics of data accessed, such as:
- Record and column of a database referenced by a query
- Data volume of a query (record count, byte sizing, column count, etc.)
- Distributions (histograms of one or more columns of tables, cardinality)
- Domains (column type and range information)
- Whether data is continuous or categorical
- Statistics (min, max, mean, etc.)
- Schema information (may be part of domain)
- Temporal information (how often data is updated, how often should the system update it (e.g., in a cache))
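The per-query metrics and data demographics enumerated above can be gathered into a single record; the field names below are illustrative stand-ins, not a schema from the disclosure:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class QueryMetricsRecord:
    """Illustrative record combining query metrics (timing, latency,
    complexity, object usage) with data demographics (volume, cardinality,
    continuous-versus-categorical flags)."""
    received_at: float                  # timing: epoch seconds when received
    latency_s: float                    # overall latency of the query
    complexity: int                     # e.g., number of computations
    objects_used: list = field(default_factory=list)  # tables, views, remote calls
    record_count: int = 0               # data volume: rows touched
    byte_size: int = 0                  # data volume: bytes touched
    column_count: int = 0
    cardinality: Optional[dict] = None  # per-column distinct counts
    is_categorical: Optional[dict] = None

def bytes_per_record(rec: QueryMetricsRecord) -> float:
    """Derived demographic: average record width, guarding empty results."""
    return rec.byte_size / rec.record_count if rec.record_count else 0.0
```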
- Data from the monitoring modules 112, 114 a-114 c may be stored as
metrics 116 and used according to the methods described herein. Some or all of the metrics from the monitoring modules 112, 114 a-114 c as described above may be used for billing purposes, i.e., to determine resources consumed by a user in order to determine an amount owed by the user. - The
metrics 116 may also be used for capacity planning and reporting. For example, the scalable query engine 104 may suggest acquiring additional computation or storage resources on a cloud computing platform 102 a in response to the usage information. The scalable query engine 104 may expand storage and computation resources allocated to the scalable query engine 104 in response to usage data. Usage data may also be summarized and reported to an administrator of a system for planning or other business purposes. The metrics 116 may also be used to evaluate usage with respect to constraints on the resources reported by the metrics. In particular, there may be constraints on network bandwidth between the scalable query engine 104 and a source 102 a-102 c. Accordingly, in response to detecting that consumption of bandwidth as reported in the metrics on a particular connection to a source 102 a-102 c is reaching the capacity of the connection, additional transmission of queries over the connection may be paused or throttled. Likewise, in response to consumption of computation resources on a source 102 a-102 c approaching a limit on such resources, the processing of queries on that source 102 a-102 c may be paused or throttled. - As is apparent from the above description, the
environment 100 is a hybrid multi-cloud database environment 100. In particular, the environment 100 includes non-heterogeneous data management and analytic computing platforms 102 a-102 c (silos) as sources. In contrast, the scalable query engine 104 and unified access layer 106 that perform the methods disclosed herein are not sources of record; they manage the caching, movement, and accessing of data on the various computing platforms 102 a-102 c and perform, or invoke performance of, computation against data managed by other source systems of record. - As described below, the
unified access layer 106 provides unified, personalized views of databases in the systems of record and facilitates analytics data access and collaboration between individuals and teams. Various approaches are described below. In particular, approaches are described for monitoring the entire set of resources on the platforms 102 a-102 c and improving their utilization from an individual user's perspective. Approaches are further described for facilitating and accelerating the implementation and execution of analytics and data science workloads. - The
unified access layer 106 and scalable query engine 104 (“the virtualization platform”) may be used to perform the methods described herein. In particular, the virtualization platform may be used to create a virtualized access layer for individuals and small teams to collaboratively create personalized perspectives of data assets in any size enterprise (see FIGS. 6-10 and corresponding description). The virtualization platform need not be, and in many instances will not be, an enterprise's data system of record but works with data management systems of record (e.g., 102 a-102 c) irrespective of where the data management system resides. Data assets on these systems of record 102 a-102 c can be accessed and personalized regardless of where they reside, including a cloud computing platform 102 a, multiple cloud computing platforms 102 a, or a combination of on-premise computing equipment 102 c and one or more cloud computing platforms 102 a. The virtualization platform may virtualize access to a wide variety of data management systems implemented on any of these computing platforms 102 a-102 c, which may include some or all of on-premise data warehouses, cloud data warehouses, document databases, relational databases, distributed and non-distributed file systems, as well as object stores. - With many data management systems, the virtualization platform will automatically interrogate and integrate with the target system's catalog and management interfaces (see discussion of
FIGS. 11 and 12 , below). In addition to virtualized access, the virtualization platform may provide computational elasticity, workload management, and many performance-enhancing and optimization techniques (see discussion of FIGS. 13 and 14 , below). The virtualization platform may also integrate into an existing enterprise eco-system. The virtualization platform may also integrate into common security layers of various data management systems that may be implemented on the computing platforms 102 a-102 c. The virtualization platform may also account for different security of computing platforms 102 a-102 c as recorded in the policies 110, e.g., prevent data from a source being cached on another source lacking appropriate data protections. The virtualization platform may also integrate with common collaboration applications and project management tools. - The virtualization platform may adapt and make intelligent choices. For example, it may work to migrate workloads onto an available platform based on user criteria with respect to capability, time, cost, available resources, and service level agreements (see discussion of
FIGS. 16-18 , below). In addition to choosing a computing platform 102 a-102 c from a computational perspective, the virtualization platform may infer information with respect to data relationships, usage patterns, lineage, etc. of received queries. These inferences allow the virtualization platform to assist the process of identifying and personalizing by means of suggestions (or enactment) of alternative usage patterns, optimizations, and sources (see discussion of FIGS. 6-9 and FIGS. 13 and 14 , below). -
FIG. 2 illustrates a system 200 for using metrics 116 gathered as described above. In particular, the metrics 116 and policies 110 may be processed by a model builder 202 in order to generate a query performance model 204. The system 200 may be implemented on one or more computing devices implementing the engine 104 and unified access layer 106. The query performance model 204 relates attributes of a query to its performance in terms of latency, computing resources consumed (memory used, processor time/cycles used, amount of data transmitted over a network), and possibly monetary cost on a given computing platform. Attributes of a query may include a data volume referenced (e.g., size of tables referenced by the query) and complexity (number and types of computations included in the query, such as aggregations). - Note that there may be many metrics for the same query. In particular, a query from an interface 108 a-108 c has its attributes and its overall performance in terms of total latency, total computing resources consumed, and possibly a total monetary cost. That same query may result in multiple sub-queries to multiple computing platforms 102 a-102 c, each sub-query having metrics relating its own attributes to latency, consumed computing resources, and possibly a monetary cost.
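The relationship just noted between a query's overall metrics and those of its sub-queries can be sketched as an aggregation; the field names, and the assumption that sub-queries to different platforms run concurrently, are illustrative choices for this sketch:

```python
def aggregate_subquery_metrics(sub_metrics):
    """Roll per-sub-query metrics up into overall query metrics.

    Latency is taken as the maximum (sub-queries dispatched to different
    computing platforms are assumed to run concurrently), while consumed
    compute time and monetary cost are additive across sub-queries.
    """
    return {
        "latency_s": max(m["latency_s"] for m in sub_metrics),
        "compute_s": sum(m["compute_s"] for m in sub_metrics),
        "cost_usd": sum(m["cost_usd"] for m in sub_metrics),
    }
```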
- Accordingly, the
model builder 202 may include a machine learning model or other algorithm that generates one or more models relating the attributes of a query to one or more metrics of performance (latency, computing resources, monetary cost). For example, there may be an overall model obtained by evaluating the overall metrics of queries. There may be individual models for each computing platform 102 a-102 c, computed from the queries and corresponding metrics for that computing platform 102 a-102 c. The model builder 202 may operate “offline,” such as at off-peak times or on a different computing resource than the unified access layer, in order to avoid impacting production processing of queries. - The
model builder 202 or some other module may use the query performance model 204 to generate suggestions 206 that may be presented to a user by way of the user interface 108 a. Suggestions 206 may be filtered according to the policies 110, which may limit which suggestions are permitted to be implemented. As discussed in detail below with respect to FIGS. 3 through 5B , the suggestions 206 may indicate ways to improve an aspect of the performance (latency, computation resources, monetary cost) of a query based on the query performance model 204. The user interface 108 a may further receive tuning parameters 208 from the user in response to the suggestions, the tuning parameters modifying one or more aspects of processing of a query in order to improve an aspect of performance as described below with respect to FIGS. 3 through 5B . Note also that tuning 208 based on the query performance model 204 may also be performed according to the approach described below with respect to FIGS. 16 through 18 . -
FIG. 3A illustrates a method 300 that may be executed in the database environment 100 using the system 200. The method 300 may include receiving 302 a query by the unified access layer 106 from an interface 108 a-108 c. One or more attributes of the query may be logged 304, such as time of receipt, a metric of complexity (number of computations, computational complexity of individual computations), a volume of data (size of table, number of rows, number of columns, etc.) referenced by the query, or another attribute of the query. - The query may then be parsed 306 into individual data access and computation operations, and these operations may be executed 308 with respect to sources of data on one or more of the computing platforms 102 a-102 c. In particular, a computing platform 102 a-102 c may host a database that is the source of record for data referenced by the query. Note that
step 308 may be performed according to an A/B test 310. As will be described below, various approaches may be used to perform data access and computation. In particular, data access may include retrieving (“fetching”) and caching data from a remote computing platform 102 a-102 b on the on-premise computing platform 102 c. Alternatively, data access may be performed without caching. Likewise, a computation with respect to data may be performed on the on-premise computing platform 102 c or may be moved (“pushed”) to a cloud computing platform 102 a-102 b that stores the data with respect to which the computation is being performed. Accordingly, a computation of a query may be pushed to a source, or data for a query may be fetched from a source, based on its attributes according to an A/B experimental plan such that the performance metrics for queries having various attributes may be determined for each data access and computation type. - The
method 300 may include performing 312, for each operation from step 308, logging 314 of one or more values such as latency, computing resources consumed, data volume, operation type (data access, join, aggregation operation, etc.), or the like. Latency for an operation may include an elapsed time between transmitting of a query to a source 102 a-102 c and receiving a response to the query, e.g., the engine 104 acknowledging completion of the query to the source 102 a-102 c. - The
method 300 may further include logging 316 a time of completion of the query received at step 302, e.g., a time at which all of the operations from step 306 are executed 308 and an overall response to the query is returned to the unified access layer 106. Other metrics described above (see the description of monitoring modules 112, 114 a-114 c) that are captured for a query may be collected at step 316. - The
method 300 may include creating 318 records in the metrics 116 for the data logged at the preceding steps. For the overall query, the record may include the attributes logged at step 304, the latency from step 316, and computing resources that are an aggregation of the computing resources logged 314 for the individual operations of the query. For an operation, the record may include the attributes of the operation, latency, and computation resources consumed as logged at step 314 for the operation. - In some embodiments, the records of
step 318 may be augmented 320, 322 with additional data either before or after creation 318. For example, a monetary cost of an operation may be computed based on consumed computing resources and a known cost function. This monetary cost may be added 320 to the record of the operation, and the aggregate monetary cost of the operations of an overall query may be added 320 to the record of the overall query. - In some embodiments, the
policy data 110 may impose additional cost on an operation. For example, a peak time period may be defined in which the cost of operations is artificially inflated in order to bias operations toward execution outside of the peak time period or to penalize operations that are of lower priority. For example, for a particular source 102 a-102 c, records of operations during the peak time period may be assigned 322 an additional “cost” value based on computational resources consumed and independent of monetary cost, which will be used to train the model builder 202 to balance usage of that source 102 a-102 c during the peak time period. - The
method 300 may then include processing 324 the metric records by the model builder 202 in order to create the query performance model 204. As noted above, this may include creating an overall model and models for individual computing platforms 102 a-102 c that relate attributes of a query to one or more performance metrics (latency, computational resources, monetary cost). Step 324 may include using any statistical algorithm, machine learning algorithm, or artificial intelligence algorithm (e.g., neural network, deep neural network, convolution neural network, etc.) known in the art. In particular, each record may be viewed as an item of training data including the query attributes as an input and the performance metrics as desired outputs. The machine learning algorithm may then be trained to produce the desired output for a given input according to any approach known in the art. -
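As a minimal instance of the statistical algorithms named above, a single-feature least-squares fit can relate one query attribute to one performance metric. This is an illustrative stand-in, not the disclosed model builder 202; the choice of data volume as the feature and latency as the target is an assumption:

```python
def fit_latency_model(records):
    """Fit latency = a * data_volume + b by ordinary least squares.

    Each record is a (data_volume, latency_s) pair drawn from the metric
    records; the returned callable predicts latency for a new volume.
    """
    n = len(records)
    sx = sum(v for v, _ in records)
    sy = sum(l for _, l in records)
    sxx = sum(v * v for v, _ in records)
    sxy = sum(v * l for v, l in records)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return lambda volume: a * volume + b
```

A production model would use many features (complexity, platform, caching) and a richer learner, but the training-data shape, attributes in, metric out, is the same.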
FIG. 3B illustrates an example approach 326 for generating 324 the query performance model 204 using machine learning. Training data entries 328 may be generated using the data assembled as described above with respect to FIG. 3A . For example, each entry 328 may correspond to a query. For a query, a query feature set 330 may be assembled that includes features such as the volume of data referenced by the query, the metric of query complexity, which interface 108 a-108 c generated the query, which computing platform 102 a-102 c performed an operation in the query, whether data referenced by the query was cached and where it was cached, whether computation was pushed to a particular computing platform 102 a-102 c, or other attributes of the query. The training data entry 328 may further include a policy feature set 332 that includes data describing policies in effect when the query was executed, such as the policy data added at step 322. For example, some or all of the computing platforms 102 a-102 c that are used to execute the query may have policies that reflect resource limitations (computing time, memory, data volume) that are either artificially imposed or reflect physical limits of available computing and storage hardware. Accordingly, these policies may affect the latency or other metric of performance of the query and may be included in the training data entry 328. - The
training data entry 328 may include the query metrics 334 or data derived from the query metrics 334, e.g., the metrics obtained for the query as described above. In this manner, the feature sets 330, 332 of an entry 328 are an input, and the query metrics 334 are a desired output, that the machine learning model is trained to produce. - In some embodiments, training a machine learning model according to the
training data entries 328 may include training multiple machine learning models. For example, training data entries 328 may be divided into sets of entries by a sampling algorithm 340. The sampling algorithm may be any sampling algorithm known in the art. - Each set of entries may then be used to train a machine learning algorithm 342 a-342 c. The result of this is a set of machine learning models 344 a-344 c that are each trained to predict
query metrics 334 based on the input feature sets 330, 332 for the set of training data entries 328 that was used to train it. - One of the models 344 a-344 c or a combination thereof may be selected as a selected
model 346. Specifically, a given input query feature set and policy feature set of a query may be input to the models 344 a-344 c, each of which outputs a prediction for one or more corresponding query metrics that would result from the query feature set and policy feature set. A selection algorithm 348 may be used to either (a) identify a most accurate of the models 344 a-344 c as the selected model 346 or (b) generate a model 346 for selecting among the models 344 a-344 c for given input feature sets 330, 332. - Where (a) is implemented, selecting the most accurate model 344 a-344 c may include processing a set of data entries that is different from the sets of entries used to train the models 344 a-344 c (“the validation set”). The model 344 a-344 c with the most accurate predictions (corresponding most closely to the
actual metrics 334 of the validation set) may then be selected as the selected model 346. Alternatively, multiple models 344 a-344 c may be selected for use in combination, for example, as an average or weighted average of the outputs of multiple models 344 a-344 c. Other aggregations are possible, e.g., selection of the output of the multiple models 344 a-344 c that is most frequent among the outputs of the multiple models for a given input. - Where (b) is implemented, following training of the models 344 a-344 c, the
machine learning algorithm 348 may train the model 346. Training entries for the machine learning algorithm 348 may each include the outputs of the models 344 a-344 c for a query feature set 330 and policy feature set 332 of a query as inputs and the query metrics 334 for that query as a desired output. Note that the same or different training data entries 328 may be used for training the selection model 346 as were used to train the models 344 a-344 c. The model 346 is therefore used to select among outputs of the models 344 a-344 c in production. - The machine learning models 344 a-344 c and
model 346 may be embodied as any machine learning algorithm known in the art, such as a decision tree, clustering, Bayesian network, genetic algorithm, deep neural network, convolution neural network, or the like. - Note that
FIG. 3B is exemplary only and other approaches may be used. In particular, FIG. 3B illustrates an example of boosting in order to improve the performance of a machine learning algorithm. However, any boosting algorithm known in the art may be used. In some embodiments, rather than clustering training data entries 328 to obtain the models 344 a-344 c, the models 344 a-344 c are models with different configurations of hyperparameters that are each separately trained, with the selection model 346 being trained to select or combine the outputs of the models 344 a-344 c as described above. -
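Option (a) above, picking among separately trained model variants by accuracy on a held-out validation set, can be sketched as follows. The callable-based model representation and the mean-absolute-error criterion are illustrative assumptions, not elements of the disclosure:

```python
def select_model(models, validation_set):
    """Return the name of the model with lowest mean absolute error.

    `models` maps a name to a callable mapping a feature value to a
    predicted metric; `validation_set` is a list of (features, actual)
    pairs held out from training, playing the role of the validation
    set described above.
    """
    def mae(predict):
        errors = [abs(predict(x) - actual) for x, actual in validation_set]
        return sum(errors) / len(errors)
    # The model whose predictions correspond most closely to the actual
    # metrics of the validation set becomes the selected model.
    return min(models, key=lambda name: mae(models[name]))
```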
FIG. 4 and FIGS. 5A and 5B illustrate a method 400 for using a query performance model 204 to process individual queries. In particular, FIGS. 5A and 5B illustrate various query plans showing where data access and computation may be performed for a query having a given query model. The method 400 may include receiving 402 a query and parsing 404 the query into the individual operations (data access and computation) that constitute the query model of the query. This is shown in FIG. 5A , which shows a query composed of computational operation A that operates with respect to the results of computational operations B and C. Operation B processes results of a data access D with respect to a data source S1 and a data access E with respect to a data source S2. Operation C processes the result of a data access F with respect to a data source S3. In this example, computations A, B, and C are performed by a local computing platform P. Computations may include some or all of joins, sub-queries, groupings, aggregations, or other database computations. - In this example, sources S1 and S3 may be remote from the local computing platform P. The sources S1-S3 and local computing platform P may be embodied as any of the computing platforms 102 a-102 c. This is just an example, and operations may be arranged in a hierarchy of any number of levels with respect to any number of data accesses. Local computing platform P may be any of the computing platforms 102 a-102 c that is executing one or both of the
engine 104 and unified access layer 106. Sources S1-S3 refer to sources of record for databases or other types of data that are on a different computing platform 102 a-102 c than the local computing platform P, such that decisions may be made regarding pushing of computation to the source S1-S3 and fetching of data from the source S1-S3 to the local computing platform P, as described below. - The
method 400 may include identifying 406 possible alternative sources for performing each operation. For example, FIG. 5A shows one set of alternatives, with computation being performed on the premise P and data accesses being performed on the sources S1-S3 storing the data being accessed. FIG. 5B shows other alternatives. For example, data access D may be performed by first caching the data on the premise P. A computation B with respect to data from data access E of data on source S2 may be performed with computation on premise P, with data access remaining on the source S2. Computation C with respect to data on source S3 may be shifted to source S3, such that the computation C and the data access F for data processed according to computation C are all performed on the source S3. In another scenario, data may be transferred from a first source to a cache on a second source (e.g., S2 to S3) that is not the source of record for the data, such that one or both of computation and data access are performed on the second source. - Possible alternatives may be filtered with respect to
policies 110. In particular, some data may be prohibited from being cached on the local computing platform P or on a different source than its current source due to privacy or other concerns. Some sources S1-S3 may have limited computational capacity and may be used only for data access. Some sources S1-S3 may be unavailable for performing computation at peak times. Any other policy may limit possible alternatives as determined by a human operator or by limitations of a particular source S1-S3. In some embodiments, alternatives and their predicted metrics (see step 408) are determined without first filtering. Accordingly, where an alternative has high performance relative to others (e.g., highest performance or top N highest performance, where N is an integer) but is prohibited by policy, it may still be suggested, along with a suggestion to change the policy in order to permit it. - For the alternatives that are identified and not filtered out at
step 406, the method 400 may include determining 408 metrics for each alternative according to the query performance model 204. In particular, an alternative may be represented as an operation having various attributes and a source that may be used to perform that operation. Note that where an alternative is a data access performed by first caching on a different source (the local computing platform P or a different source S1-S3), the data transfer from the source to the different source may be performed prior to performing the operation. In particular, where the alternative is implemented, caching on the local computing platform P is performed, and subsequent queries referencing data that is cached are then executed with respect to the cached data. Accordingly, the latency of data transfer to the cache may be omitted when estimating metrics for the alternative at step 408. - Determining 408 the metrics may be performed by applying the
query performance model 204 to the attributes of the operation (data volume, computation type) and the configuration of the alternative (where the operation is performed). In particular, for a given source P, S1-S3 for performing an operation, the model corresponding to that source may be applied to the attributes of the operation to determine the estimated performance (latency, computation resources consumed, monetary cost) of performing that operation using that source. Where the operation includes a data access with first caching on the premise from a source S1-S3, the model may be a model trained to relate performance to that type of operation, i.e., caching to the local computing platform P from that specific source S1-S3, by evaluating previous data accesses performed in that same manner for the local computing platform P and that specific source S1-S3. - Step 408 may include inputting a feature set for the alternative to a machine learning model trained as described above with respect to
FIG. 3B. In addition, current policies 110 may be input as a policy feature set to this machine learning model as described above with respect to FIG. 3B. The machine learning model then outputs a predicted performance for that alternative. Note that where the machine learning model is trained with a policy feature set, the predictions may take into account changes in policy (e.g., increasing or decreasing of computing or storage resources, such as adding more computing nodes). - The
method 400 may further include implementing 410 or suggesting implementation of an alternative for an operation that has improved estimated performance relative to other alternatives for that operation. For example, a default query plan for a query may be as shown in FIG. 5A, with data accesses performed at sources S1-S3 and all computation performed on the local computing platform P. If caching for a data access on the local computing platform P or on a different source S1-S3 is determined to provide improved performance for a data access operation, then an alternative query plan (e.g., see FIG. 5B) may be automatically implemented or suggested to the user, such as by way of a notification in the interface 108a. If pushing of a computation to a source S1-S3 is determined to provide improved performance, then this may likewise be automatically implemented or suggested to the user, such as by way of a notification in the interface 108a. In the event that the user accepts a suggestion, such as by selecting the notification in the interface, the alternative represented by the suggestion may then be implemented: e.g., data may be retrieved and cached or computation may be pushed according to the suggestion. - Note that there are three types of alternatives that may be suggested and implemented: (1) those that involve generating a query plan for a query that will improve its performance by pushing an operation to a source or by performing the operation on the local computing platform P, (2) those that involve caching of data on the local computing platform P or on a different source than the source of record for that data, and (3) those that involve generating views and structural changes to views based on analysis of data access patterns by a single user or group of users.
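As an illustrative sketch of steps 406-410, alternatives for an operation can be scored with predicted metrics and filtered against policies before the best one is implemented or suggested. The class, field, and parameter names below are hypothetical, not part of the disclosure, and the weighted-sum scoring is one assumed way of combining the predicted metrics:

```python
from dataclasses import dataclass

@dataclass
class Alternative:
    """One placement option for a query operation (illustrative only)."""
    operation: str   # e.g., "join", "scan"
    location: str    # "P" (local platform) or a source "S1"-"S3"
    latency: float   # latency predicted by the query performance model 204
    cost: float      # predicted monetary cost

def permitted(alt, policies):
    # Filter per policies 110: e.g., some sources may be used only for data access.
    if alt.operation != "scan" and alt.location in policies.get("compute_prohibited", set()):
        return False
    return True

def best_alternative(alternatives, policies, w_latency=1.0, w_cost=1.0):
    # Score each permitted alternative and return the one with the lowest
    # weighted score (improved estimated performance relative to the others).
    candidates = [a for a in alternatives if permitted(a, policies)]
    return min(candidates, key=lambda a: w_latency * a.latency + w_cost * a.cost)
```

The chosen alternative could then either be implemented automatically or surfaced as a suggestion in the interface 108a.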
- Alternatives of type (1) may be implemented by the
scalable query engine 104 that invokes execution of an operation on the local computing platform P or a remote source in order to achieve improved performance (lower latency, lower consumption of computing resources, lower cost). In particular, alternatives of type (1) may be performed in combination with other known query optimization techniques (e.g., SPARK) in order to improve the performance of queries. Accordingly, alternatives of type (1) may be performed by a subsystem of the engine 104 that determines where data and computations should be located in order to improve performance and to meet a user's expected performance/resource targets (see discussion of FIGS. 16 through 18, below). As noted throughout, policies 110 may limit or direct how alternatives of any of types (1), (2), and (3) are performed and may result in execution of queries, implementation of caching, and structuring of views that is sub-optimal. - Alternatives of type (2) may be implemented by the
unified access layer 106 that invokes caching of data on the premise P or on a source other than the source of record for the cached data. This may include generating materialized views and maintaining coherency of cached data and materialized views created according to the methods disclosed herein. The unified access layer 106 may also translate queries to reference the location of the cached data rather than a source of record. Translation of queries may also be performed in order to better use the computing platforms 102a-102c and other computing resources of the premise P. Alternatives of type (3) may also be implemented by the unified access layer 106, which can generate views spanning multiple computing platforms 102a-102c due to its virtualization of access to databases on them. - Note that the
unified access layer 106 may influence the manner in which the scalable query engine 104 improves the performance of queries. For example, the unified access layer 106 may inform the scalable query engine 104 of the availability of cached versions of data, views, materialized views, or other alternative representations of data generated according to the methods disclosed herein. In some embodiments, queries passed to the scalable query engine 104 may be annotated with "hints" that indicate which physical representation of data (source of record, local cache, remote cache, materialized view, etc.) to use when executing the query in order to improve performance. -
FIGS. 4, 5A, and 5B further illustrate the hybrid multi-cloud approach that may be implemented using the unified access layer 106 and engine 104. In particular, queries may span multiple cloud computing platforms and a local computing platform in a hybrid fashion that is transparent to the user. There may be multiple cloud-computing platforms from multiple providers that are accessed in this manner, e.g., AWS, GCP (GOOGLE Cloud Platform), AZURE, and the like. -
FIG. 6 illustrates a method 600 that may be executed by the system 200. The method 600 may be used to (a) streamline usage of an individual user that may include redundancies and/or (b) facilitate collaborative usage by multiple users in order to improve performance. In the description below, processing is described with respect to overlapping queries of multiple users. Overlap among the queries of an individual user may be identified and used to improve performance in the same manner. - The
method 600 may include receiving 602 queries, such as by way of any of the interfaces 108a-108c, from a plurality of users and evaluating 604 the structure of the queries. In particular, a query may be represented as a hierarchy or query model (see FIG. 5A) including a plurality of nodes such that each node is either a data access or a computation. A node may have child nodes representing sources of data processed by that node. Accordingly, a node may take as inputs a data access of a child node or a result of computation by a child node. - The queries may be evaluated 606 to identify source calls made as a result of the query. In particular, the source S1-S3 called, the tables accessed, the columns of tables accessed, or other data regarding data accessed as a result of a query may also be determined at
step 606. - The
method 600 may further include identifying 608 overlap. Overlap may be identified in the structure, e.g., the query model of a first query has identical nodes (same computation) and relationships among nodes as a second query, or in the source calls, e.g., the source calls of a first query reference a same table, column of a table, or other data as a second query. Note that complete overlap is not required; rather, some partial overlap may also be identified at step 608. - The
method 600 may further include generating 610 a combined alternative according to the overlap. Generating a combined alternative may be performed according to a degree of overlap as shown by the method 700 of FIG. 7. - Queries may be found 702 to overlap at the structure level in that they have overlapping query models: both overlapping sources (same tables from same source) and computation (same nodes and relationships among nodes). In such cases, a combined alternative may be created 704 as a query object defining the overlapping portion of the queries.
- Queries may be found 706 to overlap at the operation level, such as having a join of tables in common or a common aggregation of one or more common tables from common sources. Accordingly, the combined alternative may be created 708 as an index or view object that includes that operation (join or aggregation) of the one or more common tables. Note that the query object may be an expanded query that includes first dimensions of a table included in a first query that overlaps second dimensions of a second query that does not include the first dimensions.
- Queries found 710 to overlap at the source level (overlapping access to common table of a common source) may result in a combined alternative created 712 as a materialized (cached) view of all columns of the table referenced by the overlapping queries.
- Overlapping queries that access and possibly perform computation with respect to multiple tables across multiple sources S1-S3 can result in a combined alternative according to 704 and 708 that is a query object or view object referencing the multiple sources and including any overlapping computation.
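The tiers of overlap in the method 700 can be sketched as a simple classifier over pairs of queries. The dictionary representation of a query below is a hypothetical stand-in for the query model and source-call information gathered at steps 604-606; the returned labels map to the combined objects of steps 704, 708, and 712:

```python
def combined_alternative(q1, q2):
    """Classify the overlap between two queries per steps 702-712 (illustrative).

    Each query is represented here as a dict with:
      "model":      a hashable query model (operations plus their relationships),
      "operations": a set of (operation, tables) pairs,
      "tables":     a set of (source, table) pairs accessed.
    Returns the kind of combined object to create, or None when there is no overlap.
    """
    if q1["model"] == q2["model"]:
        return "query object"          # 702/704: structure-level overlap
    if q1["operations"] & q2["operations"]:
        return "index or view object"  # 706/708: operation-level overlap
    if q1["tables"] & q2["tables"]:
        return "materialized view"     # 710/712: source-level overlap
    return None
```

A real implementation would also handle partial structural overlap (a shared subtree of the query models) rather than requiring exact model equality, as noted at step 608.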
- Note that a combined alternative for multiple users may be based on interactive feedback received from the multiple users. For example, the multiple users may collaborate by way of the
interface 108a in order to propose combined alternatives that may then be implemented in the same manner as automatically-generated combined alternatives according to the methods disclosed herein. - The
method 600 may further include evaluating 612 the combined alternatives with respect to the query performance model 204. In particular, estimated (per model 204) or measured performances of the individual queries may be compared to an estimated performance of the combined alternative. In particular, the overlap among the queries may include one or more overlapping operations including data accesses from sources and computation. Accordingly, the measured or estimated performance of performing the overlapping operations separately may be compared to the estimated performance of performing the combined alternative. The combined alternative has attributes indicating the data volume and operation type of the combined alternative. These attributes may be processed by the query performance model 204 to determine an estimated performance. - The
method 600 may include evaluating 614 whether the performance of the combined alternative is better (lower latency, lower consumption of computational resources, lower monetary cost) than the combined performance of the separate overlapping operations (e.g., sum of latencies, sum of consumed computational resources, sum of monetary costs). If so, then the combined alternative may be determined to be an improvement and may be suggested 616 to a user or automatically implemented. The displayed suggestion may display the expected improvement from the combined alternative, e.g., a change in value of one or more metrics determined to be improved at step 614. An example of how step 616 may be performed is described below with respect to FIG. 8. - Referring to
FIG. 8, the method 800 may be executed by the system 200 in response to identifying a combined alternative providing a performance benefit as described above with respect to FIG. 7. The method 800 may include creating 802 a combined object according to the combined alternative. Note that this object is not an implementation of the combined alternative but rather an object containing parameters sufficient to define the combined alternative such that it can be implemented later. Accordingly, where the combined alternative is a result of combining queries at the query level, the object may include the query itself defining the operations performed and with respect to what tables in what sources. The object may, for example, include the query model of the combined query. Where the combined alternative is an index or view (see step 708), the combined object may list the tables, the sources of the tables, and how the tables are input to an operation according to the index or view. Where the combined alternative is caching of a table corresponding to multiple queries, the combined object may define a table, a source for the table, and an indication that the table is to be cached. - The
method 800 may further include adding 804 a representation of the object, linked to the object, to the interface 108. The representation may be an icon, text describing the object, or the like. The representation may include a user interface element selectable by a user such that the system 200 responds to the selection by presenting information about the object, such as the parameters defining the combined alternative to be implemented by the object, the queries that were combined to obtain the combined alternative, an expected performance improvement from implementing the combined alternative (see step 614), identifiers of other users that generated one or more of the queries used to define the combined alternative, or other information. The representation may be added 804 to an interface viewable by each user that generated a query that contributed to creation of the combined object per the method 700. - The representation may further include a user interface element selectable by a user such that the
system 200 responds to interaction with the user interface element to implement the combined alternative represented by the combined object. Accordingly, caching of a table, creating a combined view or index, or executing a query as defined by the combined object is performed 808 in response to detecting 806 selection of the user interface element. - Referring to
FIG. 9, in some embodiments, the combined object may further include data describing the performance metric that is estimated to be improved by implementing the combined object (see step 614 of the method 600). The method 900 may be executed by the system 200 when implementing a combined object, either automatically or upon instruction of a user (see discussion of FIG. 8). The method 900 may include implementing 902 the combined alternative represented by a combined object (see step 808) and measuring and evaluating 904 metrics of queries implicated by the combined object: queries and operations of queries using cached tables, queries or operations of queries using an index or view, queries defined according to a combined query, etc. The manner in which these metrics are obtained may be as described above with respect to FIG. 2. - The
method 900 may include evaluating 906 whether the metric (latency, consumption of computing resources, monetary cost) estimated to be improved by the combined object is in fact improved, e.g., whether latency is improved by caching, whether computational resources are reduced by using a combined view or index, or the like. If not, the combined alternative may be removed 908, i.e., the caching, combined view or index, combined query, or other function implemented according to a combined object may be removed 908, and queries will instead be processed individually without combination according to the combined object. -
FIG. 10 illustrates a system 1000 that may be implemented within the environment 100. In particular, a computing platform 102a-102c that is a source S1-S3 of data may implement a database 1002 that further includes metadata 1004 describing tables stored in the database 1002. The data stored in the database 1002 may also have metrics 1006 associated therewith that may be stored in the computing platform 102a-102c implementing the database 1002 or may be calculated based on the data stored in the database 1002. - The computer system implementing the
engine 104 and unified access layer 106, or some other computer system, may further implement an introspector 1008 that evaluates queries received from the interface 108a, and possibly from other interfaces 108b, 108c, that access the engine 104 by way of the unified access layer 106. - The
introspector 1008 preemptively gathers information about databases 1002 on remote computing platforms 102a, 102b in response to queries that invoke access to a database 1002. For example, as shown in FIG. 11, a database 1002 may include or have associated therewith a catalog 1100 that describes tables 1102a-1102c in the database. Each table 1102a-1102c may further include metadata that describes the table 1102a-1102c, such as the number of columns of each table 1102a-1102c, the number of rows in each table 1102a-1102c, the data type of each column, a label of each column, or other information. The introspector 1008 may gather this information from the catalog 1100 and metadata 1104a-1104c for use according to the methods described below. - The
introspector 1008 may further preemptively create managed views 1010 or indexes in response to this gathered information to facilitate subsequent queries to the database 1002 as described below. -
FIG. 12 illustrates a method 1200 that may be executed by the system 1000. The method 1200 may include receiving 1202 an operation (e.g., an operation obtained from parsing a query as described above) referencing a table ("the subject table") of a database 1002 on a source. The method 1200 is particularly advantageous where the source is on a different computing platform 102a-102c than the system 1000. However, this is not required. - The operation may be executed 1204 with respect to the source. In particular, the
engine 104 may issue a request to the source of the subject table and the database 1002 to perform the operation. The source returns a result of the operation to the engine 104, which returns the result to the interface 108a-108c that issued the query that included the operation through the unified access layer 106. - In parallel with
step 1204 and/or in a background process, the introspector 1008 detects or monitors the operation and detects the reference to the subject table and the source. In response, the introspector 1008 retrieves 1206 the catalog 1100 of the database 1002 from the source. The introspector 1008 uses this information to identify the tables 1102a-1102c of the database 1002 and to retrieve 1208 the metadata 1104a-1104c of the tables 1102a-1102c (other than the subject table, if metadata for it was already obtained as a result of step 1204) from the source. - The
introspector 1008 may further compute 1210 or retrieve metrics for the tables 1102a-1102c. The metrics may include statistics (mean, median, standard deviation, min, max) of the columns of the table, one or more histograms each of a column of values in the table, the top K most frequent values (K being an integer), cardinality, samples of values from the table, or other data describing the table or columns of the table. - The
method 1200 may further include creating 1212 an index of the database 1002 according to the catalog 1100 and creating views (managed views 1010) for the tables 1102a-1102c of the database 1002 using the catalog 1100. Note that this index and these views are created without a user request to do so and prior to any access by the user of the tables referenced by the views. These views may then be represented in the interface 108a to the user, and these representations may be selected or otherwise accessed by the user to invoke their use by the engine 104 to process queries to the database 1002. - Referring to
FIGS. 13 and 14, the additional information gathered according to the method 1200 may be used to provide suggestions or automated adjustments for the improvement of queries of a user. To that end, a system 1300 in the environment 100 may include a recommendation model builder 1302 that generates a recommendation model 1304. The recommendation model builder 1302 may take as inputs some or all of the following: the remote metadata 1004, the managed views 1010, and a logical query history 1306 that is a listing of queries received from one or more of the interfaces 108a-108c. - In particular, the logical query history may represent queries as strings as received from the user or as query models derived from query strings received from a user. Note that queries may also be generated by an application implementing an interface 108a-108c. The logical query history of a query may include the actual query model of a query (including any pushing of a computation to a source or referencing of cached data on a second source rather than a first source that is the source of record for the data) and may further include the original query model based on the query received from an interface 108a-108c. Queries including these pairs of actual and original query models may then be used to suggest the creation of a new view with those specific operations against a single source that could then be cached, receive pushed computations, or have some other aspect of its operation modified as described herein.
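A minimal sketch of such a logical query history entry, pairing the original and actual query models, follows. Scanning for entries where the two diverge yields candidates for suggesting a new view embodying the rewritten operations. The class and field names are hypothetical, and the models are represented as plain strings for brevity:

```python
from dataclasses import dataclass

@dataclass
class LoggedQuery:
    """One entry of the logical query history 1306 (illustrative fields)."""
    user: str
    original_model: str  # query model as received from an interface 108a-108c
    actual_model: str    # query model as executed (after pushdown or cache rewrites)

def view_candidates(history):
    # Entries whose executed plan diverged from the original query are
    # candidates for creating a view embodying the rewritten operations.
    return [q for q in history if q.actual_model != q.original_model]
```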
- Note further that inasmuch as the
unified access layer 106 and engine 104 operate with respect to a hybrid multi-cloud environment, the logical query history references queries with respect to multiple computing platforms 102a-102c and may reference queries with respect to multiple cloud-computing platforms. Multiple users may access the unified access layer 106 by way of multiple interfaces 108a-108c such that the logical query history further includes queries generated by multiple users, e.g., a team of users. Accordingly, the recommendation model builder 1302 may be used to improve the performance of collaborative use of a hybrid multi-cloud environment in accordance with the approach described below. - The
user interface 108a or a component in the engine 104 or unified access layer 106 may use the recommendation model 1304 to output suggestions 1308 to the user through the user interface 108a, the suggestions 1308 being suggested modifications to queries in order to improve performance, completeness, or another property of a result of a query. The user may also provide tuning parameters 1310 through the interface 108a in response to the suggestions 1308. The tuning parameters 1310 may then be implemented by the engine 104 and/or unified access layer 106 with respect to a query. -
FIG. 14 illustrates a method 1400 that may be executed by one or more computer systems implementing the system 1300. The method 1400 may include monitoring 1402 queries, such as by evaluating the logical query history 1306 of one or more users with respect to one or more databases 1002. The method 1400 may further include evaluating 1404 some or all of the structures of the queries, metrics of the queries (e.g., performance metrics as described above with respect to FIGS. 1 through 5B), and statistics (see discussion of step 1210) of tables referenced by the queries. - The
method 1400 may include generating 1406 the recommendation model 1304 according to the evaluation 1404. Transformations may then be suggested 1408 according to the recommendation model 1304 and then implemented upon approval by the user. Alternatively, suggestions according to the recommendation model may be autonomously implemented. In particular, generation of views, materialized views, caches, and the like, as well as transformations of queries to take advantage of these, may be performed autonomously by the unified access layer 106. In yet another alternative, a transformation may be implemented for a query, followed by issuing a notification to the user requesting confirmation of whether the transformation should be kept or not repeated for subsequent queries (for example, where the transformation involves caching that could improve performance and can be easily reversed if not desired by a user). The recommendation model 1304 may be generated 1406 and suggest 1408 implementations of transformations according to some or all of the following examples. - Note that multiple of the transformation examples below may be implemented in a combined fashion and that some of the transformations in the examples below may also be implemented and subsequently pruned in response to changes in the query history or the databases queried over time. Likewise, multiple transformations may be generated for a single query (e.g., for different partial queries of a single query). In particular, where the basis of a transformation with respect to one or more tables as described below ceases to be present, that transformation may be removed (stop caching, remove view, etc.). Likewise, a transformation may be pruned where a change in the
policies 110 indicates that the transformation is no longer permitted (e.g., a change in available resources for a particular source). - Example 1: The query history may indicate one or more joins of table T1 from source S1 and table T2 from source S2, where S1 and S2 may be the same source or different sources. Table T1 has cardinality C1 and table T2 has cardinality C2. Based on the cardinalities C1 and C2, the computational cost of joining tables T1 and T2 may be estimated. The monetary cost of performing the join on source S1, S2, or a local computing platform P may then be estimated. The computational latency for performing the join on sources S1, S2, and P may be computed from the computational cost based on the processing speed of each. A network latency of various scenarios for performing the join may be computed based on the sizes of tables T1 and T2, where the scenarios include: transfer T1 from S1 to S2 and transfer the result of the join to P; transfer T2 from S2 to S1 and transfer the result of the join to P; or transfer T1 and T2 from S1 and S2, respectively, to P. A monetary cost of a scenario may also be computed, such as using the
query performance model 204 discussed above. Based on these measures of performance (latency and cost), a scenario may be identified that has improved performance relative to other scenarios, e.g. has a score based on a sum or weighted sum of computational latency, network latency, and cost that is less than the score of another scenario calculated in the same manner. - Accordingly, the
recommendation model 1304 may be programmed at step 1408 to suggest implementing joins of tables T1 and T2 according to the scenario identified as having improved performance. In some embodiments, the recommendation model 1304 may further be programmed to execute joins according to the identified scenario anytime there is a join of any table TA on source S1 and any other table TB on source S2 having a difference in cardinality approximating the difference in cardinality between tables T1 and T2 (e.g., the cardinality of TA being (X ± Y)% larger than the cardinality of TB, where X is the difference in cardinality of T1 and T2 and Y is a tolerance that is either a fixed value or a function of attributes of one or both of TA and TB). - Example 2: The metadata from various tables on various sources may be evaluated to identify related data sets. In particular, the
system 1300 may get foreign key information from the tables of remote sources and determine whether the foreign keys of tables on different sources overlap one another or the local keys of databases hosted by the local computing platform P executing the system 1300. For example, if all of the keys of a table T1 on source S1 match the keys of a table T2 on source S2, then tables T1 and T2 may be determined to be related, such that the recommendation model 1304 may be programmed to suggest including T2 in queries referencing T1, and vice versa. - Example 3: Caching of a table may be recommended based on the query history, e.g., a frequently accessed table may be cached on a local computing platform P or a source other than the source of record for that table. Likewise, the query history may be evaluated to determine whether the cached table should be indexed. The index may be a tree index, bitmap index, or other type of index.
- For example, if filtering operations are identified in the query history with respect to a same column of a table with a threshold level of frequency (e.g., a threshold percentage of filtering operations referencing that column), then the table may be both cached and indexed with respect to that column. In some instances, a table may be a join of two or more tables with respect to a particular key such that indexing with respect to that key is advantageous. In other instances, reads of a table are for specific values or are scans of the table such that indexing is not beneficial. Accordingly, indexing is not suggested in such instances. In still other instances, the query history indicates that reads or filtering are not performed predominantly (e.g., are below the threshold level of frequency) with respect to any particular key or column of a cached table, such that indexing is omitted as not being advantageous.
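The indexing decision in this example can be sketched as a frequency check over the filter columns observed in the query history. The function name and the default threshold value are assumptions for illustration, not values taken from the disclosure:

```python
from collections import Counter

def suggest_index_column(filter_columns, threshold=0.5):
    """Return the column to index a cached table on, or None (illustrative).

    filter_columns: one column name per filtering operation seen in the
    query history for the table. An index is suggested only when a single
    column accounts for at least `threshold` of all filtering operations;
    otherwise (point reads, scans, or scattered filters) indexing is skipped.
    """
    if not filter_columns:
        return None
    column, hits = Counter(filter_columns).most_common(1)[0]
    return column if hits / len(filter_columns) >= threshold else None
```

For instance, a history in which most filters reference a single column would yield that column as the index suggestion, while a history of scattered filters would yield no suggestion.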
- In some embodiments, the
recommendation model 1304 may determine whether a cached table should be partitioned in the cache. In particular, this may be done where the query history indicates that particular columns or rows of the cached table are used more than others or to facilitate parallel access of different portions of the cached table as indicated by the query history. - The
recommendation model 1304 may further suggest a frequency of updating of a cache based on the frequency of writes to the table that is cached, e.g., an update frequency that is some fraction of the frequency of writes to the table. A timing of creating and updating a cache may be determined by the recommendation model 1304 according to the query history, e.g., performed during low-usage periods or after performance of update cycles on the source of record for the cached data. - Example 4: A view of one or more tables on one or more sources S1-S3 may be suggested, along with caching of a materialized version of this view on one of these sources, the premise P, or a different source. Performing this action may be recommended by the
recommendation model 1304 in response to detecting a number of queries by one or more users of a team that reference the data in the view and that meet a threshold criterion, e.g., where the number of queries per unit time and the volume of data referenced indicate a performance savings from caching in a materialized view based on the query performance model 204. In particular, a cached materialized view may represent a “partial query,” e.g., a portion of the accessed tables and operations performed on the accessed tables that were included in one or more previous queries and that may then be cached in a materialized view for subsequent use. - Caching the tables as a materialized view rather than separately may also be suggested/implemented to facilitate pushing down of computation to the source caching the materialized view. For example, where an analysis of the query history indicates that it would be advantageous to push down computation of queries performing an operation with respect to two or more tables, the
recommendation model 1304 may further suggest/implement a materialized view corresponding to that operation in order to further improve execution of subqueries. Note that this is particularly helpful where the materialized view is of tables on different sources S1-S3, such that the virtualization provided by the unified access layer 106 enables both the detection of the possibility of the materialized view and its implementation. - Example 5: The
recommendation model 1304 may suggest separating a view defined by a user or even a view that is automatically generated according to the methods disclosed herein. In particular, this may be done to improve performance when computation involving the tables referenced by the view is pushed down to a source of the tables or a location where the tables are cached. - Example 6: The
recommendation model 1304 may suggest that caching of a view or a table be disabled where usage frequency of the view or table as indicated in the query history is below a predefined threshold (i.e., cold data) or is low relative to other views or tables referenced in the query history. The recommendation model 1304 may further suggest disabling of caching of a table where the updating of the table is performed at a frequency below some threshold or is low relative to other tables. - Example 7: The
recommendation model 1304 may suggest pushing an asymmetric join of a large table with a small table (some predefined percentage smaller than the large table) to the computing platform hosting the large table. In contrast, for large parallel joins (a join of tables that are both larger than a predefined size) on separate sources, the suggestion may be that the join be shifted to the local computing platform P and that the tables be cached on the local computing platform P. - Example 8: The query history may indicate queries to multiple sources having tables with identical keys. Accordingly, the
recommendation model 1304 may detect this and suggest expanding a query against one of these tables to include the corresponding tables from the multiple sources having identical keys (i.e., blending the multiple tables). - Example 9: Many business intelligence (BI) queries (e.g., from interface 108 c) follow a similar pattern of joining multiple dimension tables with a fact table to allow for grouping on, filtering by, or displaying dimension details. The BI tool will then typically execute these joins for every single query that it executes. Similar issues exist for machine learning (ML) model building, since the source data may have to be queried multiple times in order to generate a model or perform other functions of an ML algorithm.
- Accordingly, the
recommendation model 1304 may detect such activity and suggest or implement transformations accordingly. For example, instead of computing a join for every single query from the BI tool or ML algorithm, the join can be materialized (denormalized) on the local computing platform P, a source of one or both of the tables joined, or another source to which the materialized view is pushed in order to improve performance of a subsequent computation according to the methods disclosed herein. This will improve performance of downstream queries that reference that join. - This approach improves performance especially in cases where the tables to be joined are on different sources. For example, if a join as described in Example 9 is of tables on different sources, the
recommendation model 1304 may suggest a cached materialized view of this join on the local computing platform P. The view may be a complete join of all the original tables referenced in the preceding query. Since the tables are on different sources, the virtualization functionality of the unified access layer 106 may be used to retrieve the data from the different sources and create the materialized view. - If the tables referenced by the join are on the same source that supports creation of materialized views, this functionality may be used. Otherwise, the
unified access layer 106 may invoke creation and maintenance of the materialized view on that source, the local computing platform P, or a different source. - Note that multiple materialized views may be combined where indicated by the query history. For example, where materialized views of multiple joins are all referenced in the same query one or more times, a materialized view of the multiple joins may be created, such as on the same source on which the individual materialized views of the joins are created.
- Note also that a materialized view may be pruned (deleted) in view of the query history, such as where a frequency of occurrence of queries including the join of a materialized view is below a threshold for a predetermined time period.
- Note also that the materialized views may be created and pruned in response to changes in system capabilities. For example, where more storage is added to a source, the number of materialized views may be increased. For example, for a first system capability, a certain amount of storage is available for materialized views and a first number of joins having the highest frequency of occurrence are selected and corresponding materialized views are created. If the amount of storage increases, a second number of joins having the highest frequency of occurrence, which may include some joins omitted from the first number of joins, may be selected and corresponding materialized views may be created. When storage decreases, the reverse of this process occurs such that some joins in the second number will be pruned as occurring less frequently than those joins in the first number of joins.
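The storage-driven selection and pruning described above amounts to a greedy ranking of joins by frequency of occurrence; a sketch under assumed inputs (the join identifiers, statistics format, and budget units are illustrative):

```python
def select_views(join_stats, storage_budget):
    """Greedily pick the most frequently occurring joins to
    materialize within a storage budget.

    join_stats: dict mapping a join identifier to a tuple of
    (frequency of occurrence in the query history, view size).
    Shrinking the budget later prunes the least frequent views first.
    """
    chosen, used = [], 0
    ranked = sorted(join_stats.items(), key=lambda kv: -kv[1][0])
    for join_id, (freq, size) in ranked:
        if used + size <= storage_budget:
            chosen.append(join_id)
            used += size
    return chosen

stats = {"J1": (50, 10), "J2": (30, 20), "J3": (10, 5)}
print(select_views(stats, 15))  # ['J1', 'J3']
print(select_views(stats, 35))  # ['J1', 'J2', 'J3']
```

Increasing the budget from 15 to 35 admits J2, which was omitted at the smaller capability; reversing the change would prune it again, mirroring the behavior described above.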
- Example 10: Many BI queries follow similar patterns of computing aggregations (e.g., count, sum, avg) of one or more metrics on results filtered on one or more dimensions. These aggregations can be generalized either:
-
- By removing the filter and then computing the aggregation within a dimension defined by the columns used in the filter (grouping by the columns of the filter).
- Aggregating at a more fine-grained level of a complex dimension and storing aggregations in a form that allows for combining partial aggregation results into a final aggregated value (drill-down optimization).
- A combination of the above two approaches.
- For example, consider the following transformation:
-
- A user submits Query 1: SELECT o_orderstatus, sum(o_totalprice) AS totalprice, max(o_orderdate) AS latest_orderdate FROM orders WHERE o_orderpriority=‘1-URGENT’ GROUP BY o_orderstatus
- In response to
Query 1, the recommendation model 1304 may propose materializing (caching) the following view query (called “preaggregated” in this example) for use with subsequent queries: SELECT o_orderstatus, o_orderpriority, sum(o_totalprice) AS totalprice, max(o_orderdate) AS latest_orderdate FROM orders GROUP BY o_orderstatus, o_orderpriority - The
recommendation model 1304 then transforms subsequently received queries to use the view. For example, Query 1, if received subsequently, would be transformed to: - SELECT o_orderstatus, totalprice, latest_orderdate FROM preaggregated WHERE o_orderpriority=‘1-URGENT’
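The equivalence of the original and rewritten forms can be checked with a small self-contained sketch (illustrative data; Python's built-in sqlite3 stands in for the source systems; no re-aggregation is needed here because the filter matches a single o_orderpriority value, so each view group maps to one output row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (o_orderstatus TEXT, o_orderpriority TEXT,
                     o_totalprice REAL, o_orderdate TEXT);
INSERT INTO orders VALUES
  ('O', '1-URGENT', 100.0, '1997-01-05'),
  ('O', '1-URGENT',  50.0, '1997-03-09'),
  ('F', '1-URGENT', 200.0, '1996-07-21'),
  ('F', '2-HIGH',   300.0, '1996-08-02');

-- Materialize the pre-aggregated view, keeping the filter column
-- (o_orderpriority) as an extra grouping key.
CREATE TABLE preaggregated AS
  SELECT o_orderstatus, o_orderpriority,
         sum(o_totalprice) AS totalprice,
         max(o_orderdate)  AS latest_orderdate
  FROM orders
  GROUP BY o_orderstatus, o_orderpriority;
""")

# Query 1 against the source table.
original = conn.execute(
    "SELECT o_orderstatus, sum(o_totalprice), max(o_orderdate) "
    "FROM orders WHERE o_orderpriority='1-URGENT' "
    "GROUP BY o_orderstatus ORDER BY o_orderstatus").fetchall()

# The transformed query reads only the much smaller cached view.
rewritten = conn.execute(
    "SELECT o_orderstatus, totalprice, latest_orderdate "
    "FROM preaggregated WHERE o_orderpriority='1-URGENT' "
    "ORDER BY o_orderstatus").fetchall()

print(original == rewritten)  # the rewrite returns identical results
```

Had the filter matched several priority values, the transformed query would additionally GROUP BY o_orderstatus and re-combine the partial aggregates (sum of sums, max of maxes), which is the drill-down optimization mentioned above.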
- The
recommendation model 1304 will choose the above transformation to support “similar” queries where the filter condition may be changing, but the metric summarization is not. This helps to reduce the amount of data fetched from source systems and the amount of compute time. The resulting cache entries are also much smaller than the original dataset. This enables the system to cache many versions of these pre-aggregated queries. - In another example, datetime columns are special since most tools treat datetime dimensions dynamically: a single datetime column can be treated like a record of year, month, day, hour, minute, etc. (i.e., a hierarchical dimension). A transformation of a query referencing a datetime column according to the approaches described above may be as follows:
-
- The user submits Query 2: SELECT YEAR(l_commitdate) AS ‘year’, MONTH(l_commitdate) AS ‘month’, MIN(l_discount) AS min_discount, MAX(l_discount) AS max_discount FROM lineitem WHERE l_commitdate between ‘1997-10-01’ and ‘1998-10-31’ GROUP BY YEAR(l_commitdate), MONTH(l_commitdate)
- In response to
Query 2, the recommendation model 1304 may propose materializing a view query (called “preaggregated” in this example): SELECT l_commitdate, MIN(l_discount) AS min_discount, MAX(l_discount) AS max_discount FROM lineitem GROUP BY YEAR(l_commitdate), MONTH(l_commitdate) - The
recommendation model 1304 may then recommend transforming subsequent queries in view of the view created. For example, Query 2, if subsequently received, could be transformed as follows: SELECT YEAR(l_commitdate) AS ‘year’, MONTH(l_commitdate) AS ‘month’, min_discount, max_discount FROM ‘preaggregated’ WHERE l_commitdate between ‘1997-10-01’ and ‘1998-10-31’
- The above transformation allows the user to query with different time ranges while reducing the amount of data cached and reducing the compute time of the aggregations.
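A runnable variant of this datetime transformation (illustrative data; sqlite3's strftime stands in for the YEAR/MONTH functions, and the sketch keeps only year/month granularity in the view):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lineitem (l_commitdate TEXT, l_discount REAL);
INSERT INTO lineitem VALUES
  ('1997-09-15', 0.05), ('1997-10-03', 0.02),
  ('1997-10-20', 0.08), ('1998-01-11', 0.04);

-- Month-level pre-aggregation: MIN/MAX survive re-combination, so
-- any month-aligned time range can be answered from the view.
CREATE TABLE preaggregated AS
  SELECT strftime('%Y', l_commitdate) AS year,
         strftime('%m', l_commitdate) AS month,
         MIN(l_discount) AS min_discount,
         MAX(l_discount) AS max_discount
  FROM lineitem
  GROUP BY year, month;
""")

# The transformed query scans the small view rather than lineitem.
rows = conn.execute(
    "SELECT year, month, min_discount, max_discount FROM preaggregated "
    "WHERE year || '-' || month BETWEEN '1997-10' AND '1998-01' "
    "ORDER BY year, month").fetchall()
print(rows)
```

Because the hierarchical dimension is stored at the finer (month) level, the same cached view serves queries over any month-aligned range, which is the point made above.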
- Example 11: The approach of Examples 9 and 10 may be generalized to include more generalized improvements in the execution of subqueries. For example, BI queries and ML model building will use subqueries as part of their computations. With Push-Down of computations enabled, the incoming query may be broken up into smaller subqueries to be pushed down to the source systems. Whether to push down a subquery to a source may be determined according to the approaches described herein. In response to instances where a subquery is pushed down for data access and computation by a remote source, the
recommendation model 1304 may suggest creating a view corresponding to that subquery on the remote source, which may be a materialized view. - Example 12: In some instances, lookup/dimension data may exist on a different source S1-S3 from data sets they can be joined with. Alternatively, lookup/dimension data may need to be cleaned and prepared by the
unified access layer 106 before it can be joined to data that exists on another source. The recommendation model 1304 may therefore detect a join in such a scenario and suggest or implement pushing of the data for a remote table, or the results of a view maintained by the unified access layer 106, onto the source system to enable a join. This enables more computation to be performed on a source system instead of on the local computing platform P. - Referring again to
FIG. 14 , in some embodiments, suggested transformations per step 1408 may be presented to a user for acceptance. For example, as described above with respect to FIG. 8 , an object including data defining the suggestion may be created and a representation of it displayed to a user. The user may then interact with the representation to view the suggestion and to invoke execution of the suggestion. - Queries may then be received 1410 and executed with respect to views and cached data as defined at
step 1408. As noted above, the unified access layer 106 may transform the query to change references to sources of record to references to cached tables and materialized views on the premise or a source other than the source of record. - The
method 1400 may further include augmenting 1412 the query history 1306 according to the queries received at step 1410. Likewise, any new tables referenced by a query at step 1408 may be used to augment the query history 1306 and metadata 1004. This new data may then be processed according to the methods described herein. - Referring to
FIG. 15 , the illustrated method 1500 may be executed in the environment 100, such as by one or more computing systems implementing the engine 104, unified access layer 106, and/or user interface 108 a. - The
method 1500 may include obtaining 1502 table data for one or more tables on one or more sources, including one or more remote sources. Step 1502 may include executing the method 1200 with respect to one or more queries. The method 1500 may further include collecting 1504 the query models of one or more queries. The query models may be hierarchies as described above with respect to FIGS. 5A and 5B in which each node is either a data access with respect to a source or an operation performed with respect to an output of another node (operation or data access). - The
method 1500 may further include generating a hash with respect to one or more tables discovered at step 1502 or referenced in queries at step 1504. For example, the hash for a table may be a function of some or all of the following attributes of the table: keys, number of columns, number of rows, statistics (e.g., some or all of the statistics described with respect to the method 1200), and a query model or a portion of a query model that occurs in one or more queries that reference the table. With respect to the query model, where one or more queries have query models that reference a table, the query model used at step 1508 may include a portion of a lineage that both references the table and is common to multiple queries: e.g., a join, aggregation, or other operation that is performed by multiple queries with respect to the table. Where the common portion of the query models includes a join, the query models may be deemed common if the joins of the query models reference the table and a same second table. - The hash from
step 1506 may be according to a function such that the output of a first hash will be increasingly closer in value to a second hash where first data (steps 1502, 1504) used to generate the first hash is increasingly similar to second data used to generate the second hash. For example, the hash function may be a locality-sensitive hash function. - The locality-sensitive hash function of a table may be a function of some or all of the following attributes of a table: features of a schema of the table, statistical values based on the table, the top K most frequent values (K being an integer), a histogram of one or more columns of the table, a sample of data from one or more rows and columns of the table, or the like. Where a sample of data is used when comparing two tables, the column and row positions of the data sampled may be selected based on statistical analysis of the tables being compared. For example, a machine learning model may be trained with pairs of tables labeled with their status (matching or non-matching) to determine statistical metrics that may be used to select samples for inclusion in the hash that are likely to be relevant and facilitate the successful identification of matching tables.
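One concrete locality-sensitive scheme consistent with this description is MinHash over a table attribute such as its set of sampled values; the sketch below is illustrative (the attribute choice, number of hash functions, and use of md5 as the underlying hash are assumptions):

```python
import hashlib

def minhash_signature(values, num_hashes=16):
    """MinHash signature of a set of values: tables with similar
    value sets agree in many signature positions, a locality-
    sensitive property a cryptographic hash does not have."""
    return tuple(
        min(int(hashlib.md5(f"{seed}:{v}".encode()).hexdigest(), 16)
            for v in values)
        for seed in range(num_hashes))

def similarity(sig_a, sig_b):
    # The fraction of agreeing positions estimates Jaccard similarity.
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = minhash_signature({"alice", "bob", "carol", "dave"})
b = minhash_signature({"alice", "bob", "carol", "eve"})  # near-duplicate
c = minhash_signature({"x", "y", "z"})                   # unrelated table

print(similarity(a, b), similarity(a, c))
```

Tables whose signatures are identical or nearly so become the match candidates of step 1508, to be verified exactly at step 1510.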
- The
method 1500 may include identifying 1508 match candidates, i.e., sets of tables whose corresponding hashes from step 1506 are identical. For each set, the method 1500 may include determining 1510 whether the tables are in fact identical. For example, for a first table in a set and a second table in a set, the values from step 1502 for the first and second table may be compared. For example, it may be verified that the tables have the same keys, the same number of columns, the same data type in each column, the same number of rows, the same statistics, the same histograms for the same columns, etc. In some embodiments, one or more heuristics based on the first and second tables may be compared. These may include heuristics based on continuous and/or categorical (i.e., non-numerical) data contained in the first and second tables.
step 1502 are found to be identical, then step 1510 may include performing a value-by-value comparison for the tables, i.e. comparing a value at a row and column position in the first table to the value at the same row and column position in the second table. If all values are found to be identical, the tables may be verified 1510 to match. - If two or more tables are verified 1510 to be identical, then various actions may be taken with respect to these tables. For example, a policy constraint (e.g., from policies 110) may be imposed 1512 that restricts use of all but one of the tables. For example, a table may be constrained to be stored on a particular source such that identifying the table on a different source or on the local computing platform P is a violation of this policy. Accordingly, the
method 1500 may include suggesting using the copy of the table on the source of record in the place of identical cached copies of the table found elsewhere (e.g., local computing platform P). - The
method 1500 may include determining 1514 trust scores for the copies of the table to determine which is more trustworthy, e.g., more current, authentic, more secure, etc. For example, a trust score could be calculated for each copy of a table that is a function of whether the table is stored by a source of record, how frequently the copy of the table is updated (e.g., a cached copy), a level of security provided by a source hosting the copy of the table, or other parameters. These trust scores may then be presented to the user by way of the interface 108 a such that the user may select among the copies of the table to use for subsequent queries. Alternatively, the copy with the highest trust score may be automatically selected for processing subsequent queries. For example, references to non-selected copies may be replaced with references to the selected copy with the highest trust score when executing subsequent queries. - Where copies of a table are found 1510 to match, the
method 1500 may further include identifying 1516 a performance improvement from substituting one table from a set of matching tables with another copy. In particular, this may include using the query performance model 204. For example, in the method 400, alternatives are evaluated to determine their performance (see discussion of steps 406 and 408). Accordingly, step 406 may include identifying alternatives in view of the known matches. For example, table T1 is on source S1 and table T2 that is found to be a match is on source S2. Accordingly, an operation referencing table T1 may have the following expanded list of alternatives: perform access on S1 with processing on local computing platform P; perform access on P (cache T1 from S1) and processing on local computing platform P; perform access and processing on S1 (provided S1 has capacity to do so); perform access on S2 with processing on local computing platform P; perform access on local computing platform P (cache T1 from S2) and processing on local computing platform P; and perform access and processing on S2 (provided S2 has capacity to do so). Accordingly, step 1516 may include automatically redirecting accesses from one table to a matching table on a different source in order to improve performance, or providing suggestions to do so through the interface 108 a, which the user may then select to invoke implementation as described above with respect to FIG. 4 . - In another example, a table T1 on source S1 may be determined to be part of a composite of tables T1 and T2 on local computing platform P or another source S2. A performance improvement may include suggesting use of the composite. An accuracy improvement may include suggesting using the tables T1 and T2 from the source S1 rather than the composite to avoid using non-current data.
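The trust-score computation of step 1514 might be sketched as a simple weighted sum; the attributes, weights, and normalization below are illustrative assumptions, not prescribed by the disclosure:

```python
def trust_score(copy, weights=None):
    """Weighted trust score for one copy of a table. The attribute
    names and weights are illustrative placeholders."""
    w = weights or {"source_of_record": 0.5,
                    "update_frequency": 0.3,
                    "security_level": 0.2}
    return (w["source_of_record"] * (1.0 if copy["is_source_of_record"] else 0.0)
            + w["update_frequency"] * copy["update_freq_norm"]
            + w["security_level"] * copy["security_norm"])

copies = [
    {"name": "S1.orders", "is_source_of_record": True,
     "update_freq_norm": 1.0, "security_norm": 0.8},
    {"name": "P.orders_cache", "is_source_of_record": False,
     "update_freq_norm": 0.4, "security_norm": 0.6},
]
# The most trustworthy copy is substituted into subsequent queries.
best = max(copies, key=trust_score)
print(best["name"])  # S1.orders
```

The scores could equally be surfaced through the interface 108 a for the user to choose among the copies, per the alternative described above.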
-
FIG. 16 illustrates a system 1600 that may execute within the environment 100 in order to process queries according to user preference regarding balancing of latency with respect to the computational resources and monetary cost required to achieve that latency. - In particular, the
system 1600 may include a tuning module 1602 that executes on one or more computer systems executing the engine 104 and unified access layer 106 or some other computer system. The tuning module 1602 may access the query performance model 204 and policies 110. The tuning module 1602 may further facilitate the definition and implementation of a push policy 1604, cache policy 1606, and scale policy 1608. In particular, the push policy 1604 specifies parameters defining what computations should be pushed to the source hosting the data that is the subject of the computation. The cache policy 1606 specifies which data should be maintained in a cache on the local computing platform P or a cache on a different source S1-S3 than the source of record for that data and may further specify parameters regarding writing back the cached data to the source from which it was retrieved or updating the cached data from the source. The scale policy 1608 specifies when additional resources (computational or storage) will be acquired from a cloud computing platform 102 a or other scalable computing resource. - The
tuning module 1602 may define the policies 1604-1608 in response to user inputs 1610 received by way of the interface 108 a. The tuning module 1602 may likewise present an interface for a user to see the implications of tuning and to provide the tuning parameters to the tuning module 1602. -
FIG. 17 illustrates a method 1700 that may be executed by the tuning module 1602. The method 1700 may include receiving 1702 constraints. The constraints may include some or all of the constraints included in the policies 110, such as those specifying which source is able to perform computations, what data (e.g., which tables from which source) is permitted to be cached on the local computing platform P, how often data may be read into a cache, how long data in a cache is valid, or the like. The constraints 1702 may further include other constraints, such as a budgetary constraint specifying how much money, processing time, cache capacity, or other resource is allocated to a user, application, workflow, workspace, or other division to which a query may be assigned. The constraints may be specified in general or may be specified for each source of a plurality of sources for a user, application, workflow, workspace, or other division to which a query may be assigned. - The
method 1700 may further include defining 1704 classifications of queries. The classifications may be applied to each query such that each query is processed according to constraints or policies 1604-1608 corresponding to its classification. Classifications may be associated with a particular user, application, workflow, workspace, or other division to which a query may be assigned. For example, some applications, e.g., the data science interface 108 b, may perform high-volume computations without needing low latency. In contrast, a business intelligence tool 108 c may be expected to provide low latency. The classifications at step 1704 may be received from an operator or be pre-defined by an administrator or developer of the tuning module 1602.
- The
method 1700 may further include defining 1706 expected performance for each classification. The expected performance may be specified in terms of a quality of service (QoS) or service level agreement (SLA), e.g., a desired latency (average, maximum) for queries belonging to that classification. In other embodiments, the expected performance may be specified in terms of data volume, e.g., A + V*L, where A is a constant, V is the data volume, and L is a constant indicating a relationship between data volume and a desired latency. Of course, any other function may be used to determine a desired latency of a query in terms of data volume. The function for each classification may be different. The expected performance may be input by a user, such as by way of the interface 108. For example, a slider, dial, or other input field may enable a user to specify the expected performance. - In some embodiments, the expected performance may be defined with respect to the constraints from
step 1702. In particular, a minimum latency may be estimated based on budgetary constraints, caching constraints, pushing constraints, or other constraints that limit available options and resources for reducing latency. For example, using the query performance model 204, the latency and consumed computing resources for a given scenario (caching, pushing computation, etc.) for executing a query with respect to a volume of data (e.g., an expected volume of data, maximum volume of data, or some other value) may be calculated. The cost of implementing that scenario may then be determined based on the computational resources consumed by that scenario. - In some embodiments, performance expectations may be specified only in relative terms, e.g., a classification may be assigned a high expected performance, medium expected performance, or low expected performance. Queries classified as having high expected performance will be performed with a scenario providing higher performance than queries classified as medium or low. Likewise, queries classified as having medium expected performance will be performed with a scenario providing higher performance than queries having the low expected performance. Any number of levels may be defined in this manner.
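The classification of queries by attribute and the per-classification latency expectation A + V*L can be sketched together as follows; the class names, thresholds, and constants are illustrative placeholders:

```python
def classify_query(data_volume_gb, complexity_score):
    """Assign a query to a relative expected-performance class based
    on its attributes (referenced data volume and a complexity score
    derived from the number and type of computations)."""
    if data_volume_gb < 1 and complexity_score < 10:
        return "high"    # low latency expected (e.g., BI dashboards)
    if data_volume_gb < 100:
        return "medium"
    return "low"         # bulk workloads tolerate higher latency

def target_latency_s(data_volume_gb, A=2.0, L=0.05):
    """Desired latency as a linear function of data volume, A + V*L,
    where A and L are per-classification tuning constants."""
    return A + data_volume_gb * L

print(classify_query(0.5, 3))   # high
print(target_latency_s(100.0))  # 7.0 seconds for a 100 GB query
```

Each classification would carry its own A and L (and its own policies 1604-1608), so the same data volume can map to different latency targets for, say, a BI tool versus a data-science workflow.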
- The
method 1700 may further include generating 1708 policies 1604-1608 for each classification according to the expected performances determined at step 1706. In particular, for a classification with a high expected performance (i.e., a low target latency, whether relative or specified), performance enhancements such as caching, pushing computation to a source of data, or the like may be mandated by the policies 1604, 1606. Likewise, scaling out to achieve the low target latency may be specified in the policy 1608. Similarly, for a classification assigned a low expected performance (whether relative or an actual specified latency value), the policies may prohibit caching or consumption of resources on sources that have monetary cost in order to avoid impacting classifications with a higher expected performance and to avoid incurring costs for queries that are of low priority. - In some embodiments, generating 1708 a
caching policy 1606 may include generating an indexing policy for cached data, i.e., whether or not cached data is to be indexed and what key or column is to be indexed. The type of the index may also be specified (e.g., tree, bitmap). - In some embodiments, queries are executed according to the policies defined at
step 1708. This may include processing queries using the approach described below with respect to FIG. 18 . The performance of these queries (e.g., latency) may be measured 1710 and evaluated 1712 with respect to the expected performances for the queries, such as the expected performances defined at step 1706. If the expected performance is not met, then the method 1700 may include adjusting 1714 the policies. For example, where an expected performance is not met by one or more queries or a classification, the policies for that classification may be modified to include further performance enhancements, such as materialized views on the premise P corresponding to one or more operations (joins, sub-queries, groupings, aggregations, etc.) of the queries, or caching of tables referenced by the queries on the premise P. Where indicated by the query performance model 204, a performance enhancement may include shifting computation of these operations to a source of the tables referenced by the operations or creating a materialized view on the source of the tables, the materialized view implementing an operation included in the queries such as joins, sub-queries, groupings, aggregations, or the like. Performance enhancements may include any of the approaches described herein for transforming or improving execution of a query in a heterogeneous data management and analytic computing platform.
-
FIG. 18 illustrates a method 1800 for processing queries according to policies 1604-1608 and classifications according to the method 1700. The method 1800 may be executed by the tuning module 1602. - The
method 1800 may include receiving 1802 a query and classifying 1804 the query. In particular, this may include determining a context that issued the query, e.g., a user account, application, workflow, workspace, or other division defining a context in which the query was issued. A classification for that context as defined according to the method 1700 may then be retrieved. As noted above, a classification may also be based on attributes (data volume, complexity) of the query itself. Classification could also be performed manually, e.g., a user tagging a query with a classification. The method 1800 may further include retrieving 1806 the policies 1604-1608 assigned to that classification. - For each data access of the query, the
method 1800 may include performing 1808 an evaluation 1810 of whether the data referenced by the data access is, or is specified to be, cached. In particular, the engine 104 may cache data according to the cache policy 1606. Caching may be performed prior to receiving a query or may be performed upon receiving a first query with respect to data that is permitted or required to be cached according to the cache policy 1606, e.g., data from a source that is to be cached according to the cache policy 1606.
- In order to detect changes, a hash may be calculated of a table as stored at the source and compared to a hash of that table as cached on the local computing platform P. If the hashes differ, the table as stored on the source may be reloaded into the cache on the local computing platform P. Note that the hash function used in such cases may be a cryptographic hash rather than a locality-sensitive hash (e.g. MD5, SHA-1, or the like). In other examples, the “hash” of the source and cache versions of the table may be substituted with another characteristic of these tables, such as a Max/Count (maximum value in the table divided by number of values in the table) for these tables.
- As noted above, various levels of caching may be specified in the
cache policy 1606. Accordingly, a table is cached without indexing where the policy 1606 so specifies, and is cached with indexing according to a key or column specified in the policy 1606 where indexed caching is specified. - In the event that caching is not permitted according to the
cache policy 1606 for the classification from step 1804, the method 1800 may include performing 1814 a remote data access from the source without caching. - For the query from
step 1802, the method 1800 may include performing 1816, for each computational operation (join, aggregation, etc.), an evaluation 1818 of the push policy 1604 for the classification from step 1804. If the push policy 1604 for the source of data referenced by the operation and the classification from step 1804 indicates that the computation should be pushed to the source of the data referenced by the operation, then this is performed at step 1822. Otherwise, the operation is performed 1820 locally, e.g., on the local computing platform P. - If the policy 1604 permits pushing to a remote computing platform, the
method 1800 may further include evaluating whether operations performed by the remote computing platform are meeting 1824 latency targets. For example, the scale out policy 1608 for the remote computing platform may specify a latency target. If this target is found 1824 not to be met by the processing of one or more operations, the method 1800 may include evaluating 1826 whether scaling out of the remote computing platform is permitted according to the scale out policy 1608 that applies to the remote computing platform and the classification from step 1804. For example, this may include evaluating whether the current computing resources allocated on the remote computing platform are at a maximum permitted amount, or whether the latency meets a condition at which scale out is to be performed as specified by the scale out policy 1608. If so, then the remote computing platform is scaled 1828 out and additional computing and/or storage resources are requested (e.g., purchased) for use.
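The per-operation push decision at steps 1816-1822 and the latency-driven scale-out check at steps 1824-1828 might be sketched as follows; the dictionary shapes of the policies and the resource-unit ceiling are illustrative assumptions rather than the disclosed interfaces.

```python
# Hypothetical sketch: decide where each operation runs (push policy 1604),
# then check observed latency against the scale out policy 1608.
def place_operation(op_source: str, push_policy: dict) -> str:
    # Push the computation to the data's source when the policy says so;
    # otherwise run it on the local computing platform P (step 1820).
    return "remote" if push_policy.get(op_source) else "local"

def maybe_scale_out(observed_latency_ms: float, allocated_units: int,
                    scale_out_policy: dict) -> bool:
    # Scale out only when the latency target is missed (step 1824) and the
    # policy still permits allocating more resources (step 1826).
    target = scale_out_policy["latency_target_ms"]
    ceiling = scale_out_policy["max_units"]
    return observed_latency_ms > target and allocated_units < ceiling
```

When `maybe_scale_out` returns true, the caller would request (e.g., purchase) additional computing and/or storage resources on the remote platform, as at step 1828.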
FIG. 19 is a block diagram illustrating an example computing device 1900. Computing platforms 102 a-102 c, interfaces 108 a-108 c, or other computing devices for implementing systems and methods as described above may have some or all of the attributes of the illustrated computing device 1900. In some embodiments, a cluster of computing devices interconnected by a network may be used to implement any one or more components of the invention. -
Computing device 1900 may be used to perform various procedures, such as those discussed herein. Computing device 1900 can function as a server, a client, or any other computing entity. Computing device 1900 can perform various monitoring functions as discussed herein, and can execute one or more application programs, such as the application programs described herein. Computing device 1900 can be any of a wide variety of computing devices, such as a desktop computer, a notebook computer, a server computer, a handheld computer, a tablet computer, and the like. -
Computing device 1900 includes one or more processor(s) 1902, one or more memory device(s) 1904, one or more interface(s) 1906, one or more mass storage device(s) 1908, one or more input/output (I/O) device(s) 1910, and a display device 1930, all of which are coupled to a bus 1912. Processor(s) 1902 include one or more processors or controllers that execute instructions stored in memory device(s) 1904 and/or mass storage device(s) 1908. Processor(s) 1902 may also include various types of computer-readable media, such as cache memory. - Memory device(s) 1904 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 1914) and/or nonvolatile memory (e.g., read-only memory (ROM) 1916). Memory device(s) 1904 may also include rewritable ROM, such as Flash memory.
- Mass storage device(s) 1908 include various computer-readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown in
FIG. 19, a particular mass storage device is a hard disk drive 1924. Various drives may also be included in mass storage device(s) 1908 to enable reading from and/or writing to the various computer-readable media. Mass storage device(s) 1908 include removable media 1926 and/or non-removable media. - I/O device(s) 1910 include various devices that allow data and/or other information to be input to or retrieved from
computing device 1900. Example I/O device(s) 1910 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like. -
Display device 1930 includes any type of device capable of displaying information to one or more users of computing device 1900. Examples of display device 1930 include a monitor, display terminal, video projection device, and the like. - Interface(s) 1906 include various interfaces that allow
computing device 1900 to interact with other systems, devices, or computing environments. Example interface(s) 1906 include any number of different network interfaces 1920, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include user interface 1918 and peripheral device interface 1922. The interface(s) 1906 may also include one or more user interface elements 1918 and one or more peripheral interfaces, such as interfaces for printers, pointing devices (mice, track pads, etc.), keyboards, and the like. -
Bus 1912 allows processor(s) 1902, memory device(s) 1904, interface(s) 1906, mass storage device(s) 1908, and I/O device(s) 1910 to communicate with one another, as well as with other devices or components coupled to bus 1912. Bus 1912 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth. - For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of
computing device 1900, and are executed by processor(s) 1902. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
Claims (22)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/541,901 US20200356568A1 (en) | 2019-05-08 | 2019-08-15 | Pre-Emptive Database Processing For Performance Enhancement In A Hybrid Multi-Cloud Database Environment |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962845078P | 2019-05-08 | 2019-05-08 | |
US16/541,901 US20200356568A1 (en) | 2019-05-08 | 2019-08-15 | Pre-Emptive Database Processing For Performance Enhancement In A Hybrid Multi-Cloud Database Environment |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200356568A1 true US20200356568A1 (en) | 2020-11-12 |
Family
ID=73046024
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/541,901 Abandoned US20200356568A1 (en) | 2019-05-08 | 2019-08-15 | Pre-Emptive Database Processing For Performance Enhancement In A Hybrid Multi-Cloud Database Environment |
US16/541,941 Active 2040-07-23 US11449506B2 (en) | 2019-05-08 | 2019-08-15 | Recommendation model generation and use in a hybrid multi-cloud database environment |
US16/541,852 Abandoned US20200356559A1 (en) | 2019-05-08 | 2019-08-15 | Query Combination In A Hybrid Multi-Cloud Database Environment |
US16/542,045 Active 2040-01-28 US11216461B2 (en) | 2019-05-08 | 2019-08-15 | Query transformations in a hybrid multi-cloud database environment per target query performance |
US16/541,810 Abandoned US20200356563A1 (en) | 2019-05-08 | 2019-08-15 | Query performance model generation and use in a hybrid multi-cloud database environment |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/541,941 Active 2040-07-23 US11449506B2 (en) | 2019-05-08 | 2019-08-15 | Recommendation model generation and use in a hybrid multi-cloud database environment |
US16/541,852 Abandoned US20200356559A1 (en) | 2019-05-08 | 2019-08-15 | Query Combination In A Hybrid Multi-Cloud Database Environment |
US16/542,045 Active 2040-01-28 US11216461B2 (en) | 2019-05-08 | 2019-08-15 | Query transformations in a hybrid multi-cloud database environment per target query performance |
US16/541,810 Abandoned US20200356563A1 (en) | 2019-05-08 | 2019-08-15 | Query performance model generation and use in a hybrid multi-cloud database environment |
Country Status (3)
Country | Link |
---|---|
US (5) | US20200356568A1 (en) |
EP (1) | EP3963468A4 (en) |
WO (5) | WO2020227652A1 (en) |
Families Citing this family (36)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11410129B2 (en) | 2010-05-01 | 2022-08-09 | Monday.com Ltd. | Digital processing systems and methods for two-way syncing with third party applications in collaborative work systems |
WO2021161104A1 (en) | 2020-02-12 | 2021-08-19 | Monday.Com | Enhanced display features in collaborative network systems, methods, and devices |
WO2021099839A1 (en) | 2019-11-18 | 2021-05-27 | Roy Mann | Collaborative networking systems, methods, and devices |
WO2021144656A1 (en) | 2020-01-15 | 2021-07-22 | Monday.Com | Digital processing systems and methods for graphical dynamic table gauges in collaborative work systems |
US11436359B2 (en) | 2018-07-04 | 2022-09-06 | Monday.com Ltd. | System and method for managing permissions of users for a single data type column-oriented data structure |
US11698890B2 (en) | 2018-07-04 | 2023-07-11 | Monday.com Ltd. | System and method for generating a column-oriented data structure repository for columns of single data types |
US11621966B1 (en) * | 2019-02-15 | 2023-04-04 | DataVisor, Inc. | Detection of malicious user accounts of an online service using feature analysis |
US11269861B2 (en) * | 2019-06-17 | 2022-03-08 | Bank Of America Corporation | Database tool |
US11573964B2 (en) * | 2019-07-17 | 2023-02-07 | At&T Intellectual Property I, L.P. | Reducing database system query transaction delay |
US11175816B2 (en) | 2019-11-18 | 2021-11-16 | Monday.Com | Digital processing systems and methods for automatic user time zone updates in collaborative work systems |
US11269879B2 (en) * | 2020-01-13 | 2022-03-08 | Google Llc | Optimal query scheduling according to data freshness requirements |
US11372871B1 (en) * | 2020-02-21 | 2022-06-28 | Rapid7, Inc. | Programmable framework for distributed computation of statistical functions over time-based data |
US20210311942A1 (en) * | 2020-04-02 | 2021-10-07 | International Business Machines Corporation | Dynamically altering a query access plan |
IL297858A (en) | 2020-05-01 | 2023-01-01 | Monday Com Ltd | Digital processing systems and methods for enhanced collaborative workflow and networking systems, methods, and devices |
US20240184989A1 (en) | 2020-05-01 | 2024-06-06 | Monday.com Ltd. | Digital processing systems and methods for virtualfile-based electronic white board in collaborative work systems systems |
US11277361B2 (en) | 2020-05-03 | 2022-03-15 | Monday.com Ltd. | Digital processing systems and methods for variable hang-time for social layer messages in collaborative work systems |
US11880354B2 (en) * | 2020-06-30 | 2024-01-23 | Actian Corporation | Patchindex system and method for updatable materialization of approximate constraints |
US12124454B2 (en) * | 2020-08-04 | 2024-10-22 | International Business Machines Corporation | Shadow experiments for serverless multi-tenant cloud services |
US11531452B2 (en) | 2021-01-14 | 2022-12-20 | Monday.com Ltd. | Digital processing systems and methods for group-based document edit tracking in collaborative work systems |
US12080433B2 (en) * | 2021-02-10 | 2024-09-03 | International Business Machines Corporation | Healthcare application insight compilation sensitivity |
CN113656437B (en) * | 2021-07-02 | 2023-10-03 | 阿里巴巴新加坡控股有限公司 | Model construction method for predicting execution cost stability of reference |
CN113535705B (en) * | 2021-08-03 | 2024-02-02 | 佛山赛思禅科技有限公司 | SFAD cuckoo filter and repeated data deleting method based on SFAD cuckoo filter |
US12056664B2 (en) | 2021-08-17 | 2024-08-06 | Monday.com Ltd. | Digital processing systems and methods for external events trigger automatic text-based document alterations in collaborative work systems |
US11397808B1 (en) * | 2021-09-02 | 2022-07-26 | Confluera, Inc. | Attack detection based on graph edge context |
US11853326B2 (en) * | 2021-10-14 | 2023-12-26 | Google Llc | Data warehouse indexed string token search |
US11960369B2 (en) * | 2021-10-26 | 2024-04-16 | International Business Machines Corporation | Efficient creation of a secondary database system |
US12105948B2 (en) | 2021-10-29 | 2024-10-01 | Monday.com Ltd. | Digital processing systems and methods for display navigation mini maps |
US11893015B2 (en) | 2021-11-18 | 2024-02-06 | International Business Machines Corporation | Optimizing query performance in virtual database |
US20230237060A1 (en) * | 2022-01-24 | 2023-07-27 | Dell Products, L.P. | Suggestion engine for data center management and monitoring console |
US12008001B2 (en) * | 2022-05-27 | 2024-06-11 | Snowflake Inc. | Overlap queries on a distributed database |
US20240232176A9 (en) * | 2022-10-20 | 2024-07-11 | Dell Products L.P. | Automated creation of optimized persistent database views |
US11741071B1 (en) | 2022-12-28 | 2023-08-29 | Monday.com Ltd. | Digital processing systems and methods for navigating and viewing displayed content |
US11886683B1 (en) | 2022-12-30 | 2024-01-30 | Monday.com Ltd | Digital processing systems and methods for presenting board graphics |
US11893381B1 (en) | 2023-02-21 | 2024-02-06 | Monday.com Ltd | Digital processing systems and methods for reducing file bundle sizes |
CN117390079B (en) * | 2023-09-05 | 2024-10-15 | 西安易诺敬业电子科技有限责任公司 | Data processing method and system for data center |
US12118401B1 (en) | 2023-11-28 | 2024-10-15 | Monday.com Ltd. | Digital processing systems and methods for facilitating the development and implementation of applications in conjunction with a serverless environment |
Family Cites Families (72)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5812996A (en) | 1994-07-12 | 1998-09-22 | Sybase, Inc. | Database system with methods for optimizing query performance with a buffer manager |
GB9712459D0 (en) | 1997-06-14 | 1997-08-20 | Int Computers Ltd | Secure database system |
US6339769B1 (en) | 1998-09-14 | 2002-01-15 | International Business Machines Corporation | Query optimization by transparently altering properties of relational tables using materialized views |
CA2281331A1 (en) | 1999-09-03 | 2001-03-03 | Cognos Incorporated | Database management system |
US6470329B1 (en) * | 2000-07-11 | 2002-10-22 | Sun Microsystems, Inc. | One-way hash functions for distributed data synchronization |
US7475405B2 (en) | 2000-09-06 | 2009-01-06 | International Business Machines Corporation | Method and system for detecting unusual events and application thereof in computer intrusion detection |
US20020095405A1 (en) | 2001-01-18 | 2002-07-18 | Hitachi America, Ltd. | View definition with mask for cell-level data access control |
US20040068559A1 (en) | 2002-10-04 | 2004-04-08 | Shaw Terry D. | Method for detection of unauthorized computer system usage |
US7426520B2 (en) * | 2003-09-10 | 2008-09-16 | Exeros, Inc. | Method and apparatus for semantic discovery and mapping between data sources |
US7657501B1 (en) | 2004-08-10 | 2010-02-02 | Teradata Us, Inc. | Regulating the workload of a database system |
US7310647B2 (en) | 2003-12-24 | 2007-12-18 | Oracle International Corporation | Column masking of tables |
US8073836B2 (en) | 2004-03-01 | 2011-12-06 | Epicor Software Corporation | System for viewing databases |
US20050203881A1 (en) | 2004-03-09 | 2005-09-15 | Akio Sakamoto | Database user behavior monitor system and method |
US7395258B2 (en) | 2004-07-30 | 2008-07-01 | International Business Machines Corporation | System and method for adaptive database caching |
US20060074897A1 (en) | 2004-10-04 | 2006-04-06 | Fergusson Iain W | System and method for dynamic data masking |
US7610264B2 (en) * | 2005-02-28 | 2009-10-27 | International Business Machines Corporation | Method and system for providing a learning optimizer for federated database systems |
US8244745B2 (en) | 2005-12-29 | 2012-08-14 | Nextlabs, Inc. | Analyzing usage information of an information management system |
US20080195577A1 (en) * | 2007-02-09 | 2008-08-14 | Wei Fan | Automatically and adaptively determining execution plans for queries with parameter markers |
US7769729B2 (en) | 2007-05-21 | 2010-08-03 | Sap Ag | Block compression of tables with repeated values |
US8146165B2 (en) | 2007-08-16 | 2012-03-27 | Verizon Patent And Licensing Inc. | Method and apparatus for providing a data masking portal |
US8181221B2 (en) | 2007-08-16 | 2012-05-15 | Verizon Patent And Licensing Inc. | Method and system for masking data |
US20090100527A1 (en) | 2007-10-10 | 2009-04-16 | Adrian Michael Booth | Real-time enterprise data masking |
US8595834B2 (en) | 2008-02-04 | 2013-11-26 | Samsung Electronics Co., Ltd | Detecting unauthorized use of computing devices based on behavioral patterns |
US8538985B2 (en) | 2008-03-11 | 2013-09-17 | International Business Machines Corporation | Efficient processing of queries in federated database systems |
US9047485B2 (en) | 2008-03-12 | 2015-06-02 | International Business Machines Corporation | Integrated masking for viewing of data |
US9311369B2 (en) | 2008-04-28 | 2016-04-12 | Oracle International Corporation | Virtual masked database |
US8775413B2 (en) * | 2008-06-30 | 2014-07-08 | Teradata Us, Inc. | Parallel, in-line, query capture database for real-time logging, monitoring and optimizer feedback |
US9400879B2 (en) | 2008-11-05 | 2016-07-26 | Xerox Corporation | Method and system for providing authentication through aggregate analysis of behavioral and time patterns |
US8769684B2 (en) | 2008-12-02 | 2014-07-01 | The Trustees Of Columbia University In The City Of New York | Methods, systems, and media for masquerade attack detection by monitoring computer user behavior |
US8250630B2 (en) | 2009-03-05 | 2012-08-21 | Cisco Technology, Inc. | Detecting unauthorized computer access |
US8417690B2 (en) * | 2009-05-15 | 2013-04-09 | International Business Machines Corporation | Automatically avoiding unconstrained cartesian product joins |
US8554801B2 (en) | 2009-07-10 | 2013-10-08 | Robert Mack | Method and apparatus for converting heterogeneous databases into standardized homogeneous databases |
US8321443B2 (en) | 2010-09-07 | 2012-11-27 | International Business Machines Corporation | Proxying open database connectivity (ODBC) calls |
US8356027B2 (en) | 2010-10-07 | 2013-01-15 | Sap Ag | Hybrid query execution plan generation and cost model evaluation |
EP2646928A4 (en) * | 2010-12-03 | 2016-11-09 | Hewlett Packard Entpr Dev Lp | Systems and methods for performing a nested join operation |
US11544288B2 (en) * | 2010-12-23 | 2023-01-03 | Mongodb, Inc. | Systems and methods for managing distributed database deployments |
US8983985B2 (en) | 2011-01-28 | 2015-03-17 | International Business Machines Corporation | Masking sensitive data of table columns retrieved from a database |
US8826370B2 (en) | 2011-03-22 | 2014-09-02 | Informatica Corporation | System and method for data masking |
US8930410B2 (en) | 2011-10-03 | 2015-01-06 | International Business Machines Corporation | Query transformation for masking data within database objects |
US8762406B2 (en) | 2011-12-01 | 2014-06-24 | Oracle International Corporation | Real-time data redaction in a database management system |
US9171182B2 (en) | 2012-10-31 | 2015-10-27 | Tata Consultancy Services Limited | Dynamic data masking |
US9813318B2 (en) * | 2013-03-15 | 2017-11-07 | International Business Machines Corporation | Assessment of best fit cloud deployment infrastructures |
US9633227B2 (en) | 2013-07-29 | 2017-04-25 | Tencent Technology (Shenzhen) Company Limited | Method, apparatus, and system of detecting unauthorized data modification |
EP3039574A4 (en) | 2013-08-29 | 2017-03-22 | Hewlett-Packard Enterprise Development LP | Queries involving multiple databases and execution engines |
US9275252B2 (en) | 2013-09-30 | 2016-03-01 | Bank Of America Corporation | Enhanced view compliance tool |
US9621680B2 (en) | 2013-10-21 | 2017-04-11 | Globalfoundries Inc. | Consistent data masking |
US9372891B2 (en) * | 2013-12-13 | 2016-06-21 | Red Hat, Inc. | System and method for querying hybrid multi data sources |
US20150248462A1 (en) | 2014-02-28 | 2015-09-03 | Alcatel Lucent | Dynamically improving streaming query performance based on collected measurement data |
US9600554B2 (en) | 2014-03-25 | 2017-03-21 | AtScale, Inc. | Interpreting relational database statements using a virtual multidimensional data model |
US9762603B2 (en) | 2014-05-10 | 2017-09-12 | Informatica Llc | Assessment type-variable enterprise security impact analysis |
US10437843B2 (en) | 2014-07-29 | 2019-10-08 | Microsoft Technology Licensing, Llc | Optimization of database queries via transformations of computation graph |
US20160048408A1 (en) | 2014-08-13 | 2016-02-18 | OneCloud Labs, Inc. | Replication of virtualized infrastructure within distributed computing environments |
US9141659B1 (en) | 2014-09-25 | 2015-09-22 | State Farm Mutual Automobile Insurance Company | Systems and methods for scrubbing confidential insurance account data |
US10108666B2 (en) * | 2015-03-10 | 2018-10-23 | Oracle International Corporation | Adaptive handling of skew for distributed joins in a cluster |
US9536072B2 (en) | 2015-04-09 | 2017-01-03 | Qualcomm Incorporated | Machine-learning behavioral analysis to detect device theft and unauthorized device usage |
US10528540B2 (en) | 2015-05-11 | 2020-01-07 | AtScale, Inc. | Dynamic aggregate generation and updating for high performance querying of large datasets |
CN106909473A (en) * | 2015-12-23 | 2017-06-30 | 阿里巴巴集团控股有限公司 | A kind of node restart after data processing method and equipment |
US10762539B2 (en) | 2016-01-27 | 2020-09-01 | Amobee, Inc. | Resource estimation for queries in large-scale distributed database system |
WO2017165914A1 (en) | 2016-03-31 | 2017-10-05 | Wisetech Global Limited | "methods and systems for database optimisation" |
CN109154937A (en) | 2016-04-29 | 2019-01-04 | 思科技术公司 | The dynamic of inquiry response is transmitted as a stream |
EP3488342A1 (en) * | 2016-07-20 | 2019-05-29 | Worldline | Multi-criteria adaptive scheduling for a market-oriented hybrid cloud infrastructure |
US20180025052A1 (en) * | 2016-07-21 | 2018-01-25 | Cisco Technology, Inc. | Infrastructure aware query optimization |
US11347796B2 (en) * | 2016-08-11 | 2022-05-31 | Sisense Ltd. | Eliminating many-to-many joins between database tables |
US10528563B2 (en) * | 2016-12-16 | 2020-01-07 | Futurewei Technologies, Inc. | Predictive table pre-joins in large scale data management system using graph community detection |
US10579619B2 (en) * | 2017-02-02 | 2020-03-03 | International Business Machines Corporation | Validation of query plan |
WO2018170276A2 (en) * | 2017-03-15 | 2018-09-20 | Fauna, Inc. | Methods and systems for a database |
US10713248B2 (en) | 2017-07-23 | 2020-07-14 | AtScale, Inc. | Query engine selection |
US10929361B2 (en) | 2017-07-23 | 2021-02-23 | AtScale, Inc. | Rule-based data source selection |
US10909116B2 (en) * | 2018-02-20 | 2021-02-02 | International Business Machines Corporation | Optimizing query processing and routing in a hybrid workload optimized database system |
US11288280B2 (en) * | 2018-10-31 | 2022-03-29 | Salesforce.Com, Inc. | Systems, methods, and apparatuses for implementing consumer data validation, matching, and merging across tenants with optional verification prompts utilizing blockchain |
US10402589B1 (en) * | 2018-12-20 | 2019-09-03 | Vijay K. Madisetti | Method and system for securing cloud storage and databases from insider threats and optimizing performance |
US11157471B2 (en) * | 2019-03-13 | 2021-10-26 | Sap Se | Generic autonomous database tuning as a service for managing backing services in cloud |
-
2019
- 2019-08-15 US US16/541,901 patent/US20200356568A1/en not_active Abandoned
- 2019-08-15 US US16/541,941 patent/US11449506B2/en active Active
- 2019-08-15 US US16/541,852 patent/US20200356559A1/en not_active Abandoned
- 2019-08-15 US US16/542,045 patent/US11216461B2/en active Active
- 2019-08-15 US US16/541,810 patent/US20200356563A1/en not_active Abandoned
-
2020
- 2020-05-08 WO PCT/US2020/032151 patent/WO2020227652A1/en active Application Filing
- 2020-05-08 WO PCT/US2020/032162 patent/WO2020227660A1/en unknown
- 2020-05-08 WO PCT/US2020/032141 patent/WO2020227645A1/en active Application Filing
- 2020-05-08 EP EP20802741.7A patent/EP3963468A4/en not_active Withdrawn
- 2020-05-08 WO PCT/US2020/032161 patent/WO2020227659A1/en active Application Filing
- 2020-05-08 WO PCT/US2020/032157 patent/WO2020227657A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2020227657A1 (en) | 2020-11-12 |
WO2020227659A1 (en) | 2020-11-12 |
EP3963468A4 (en) | 2023-01-25 |
WO2020227645A1 (en) | 2020-11-12 |
US20200356563A1 (en) | 2020-11-12 |
US11449506B2 (en) | 2022-09-20 |
US20200356873A1 (en) | 2020-11-12 |
WO2020227652A1 (en) | 2020-11-12 |
US20200356559A1 (en) | 2020-11-12 |
US20200356561A1 (en) | 2020-11-12 |
WO2020227660A1 (en) | 2020-11-12 |
EP3963468A1 (en) | 2022-03-09 |
US11216461B2 (en) | 2022-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11216461B2 (en) | Query transformations in a hybrid multi-cloud database environment per target query performance | |
KR102627690B1 (en) | Dimensional context propagation techniques for optimizing SKB query plans | |
US20230315730A1 (en) | Self-service data platform | |
US10509785B2 (en) | Policy-driven data manipulation in time-series database systems | |
US10754877B2 (en) | System and method for providing big data analytics on dynamically-changing data models | |
US10289718B2 (en) | Partition access method for query optimization | |
US8978034B1 (en) | System for dynamic batching at varying granularities using micro-batching to achieve both near real-time and batch processing characteristics | |
US10042887B2 (en) | Query optimization with zone map selectivity modeling | |
US20130198165A1 (en) | Generating statistical views in a database system | |
US10909114B1 (en) | Predicting partitions of a database table for processing a database query | |
CA2934280A1 (en) | Long string pattern matching of aggregated account data | |
US8898136B1 (en) | System for categorizing database statements for performance tuning | |
US11244007B2 (en) | Automatic adaption of a search configuration | |
US11947537B1 (en) | Automatic index management for a non-relational database | |
CN115248815A (en) | Predictive query processing | |
WO2021034329A1 (en) | Data set signatures for data impact driven storage management | |
US12079207B2 (en) | Systems and methods for intelligent database report generation | |
US20240378195A1 (en) | Systems and Methods for Intelligent Database Report Generation | |
Martin | Collocation of Data in a Multi-temperate Logical Data Warehouse |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: DATAMEER, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NAWROCKE, KELLY;MCMANUS, MATT;NETTLING, MARTIN;AND OTHERS;SIGNING DATES FROM 20190508 TO 20190510;REEL/FRAME:050066/0695 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: ADVISORY ACTION MAILED |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |