
CN107977446A - A kind of memory grid data load method based on data partition - Google Patents


Info

Publication number
CN107977446A
CN107977446A (application CN201711309940.9A)
Authority
CN
China
Prior art keywords
data
grid
key
loading
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711309940.9A
Other languages
Chinese (zh)
Inventor
周博
周红卫
刘延新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Run He Software Inc Co
Original Assignee
Jiangsu Run He Software Inc Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Run He Software Inc Co filed Critical Jiangsu Run He Software Inc Co
Priority to CN201711309940.9A priority Critical patent/CN107977446A/en
Publication of CN107977446A publication Critical patent/CN107977446A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2458Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2471Distributed queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to a data-partition-based data loading method for in-memory data grids. The placement of each data item is computed from the grid's data partition information, and loading tasks are assigned to each grid node so that every node loads only the data stored locally. Cluster communication and data migration costs are thereby avoided, improving loading performance. In addition, only the key business data configured by the user is loaded during the loading process, making full use of memory resources.

Description

A data-partition-based data loading method for in-memory data grids
Technical field
The present invention relates to a data-partition-based data loading method for in-memory data grids, and belongs to the field of software technology.
Background art
With the development of network technology, data volumes and network traffic have grown explosively, and network applications face demands for massive storage and highly concurrent access. Under large-scale data storage and high-concurrency read/write workloads, traditional relational databases show many limitations in mass data storage, concurrent access, and system scalability, which has driven the rapid development of non-relational databases. As a kind of distributed middleware for managing in-memory data objects, an in-memory data grid has the following main characteristics: (1) A flexible storage model: storage is based on key-value pairs, so users can freely define the internal structure of values without affecting how the data is stored; since stored items are not tightly coupled to one another, the model is highly extensible and offers great advantages for handling massive data. In-memory data grids also support data structures such as distributed queues and distributed sets, further improving storage flexibility and adapting to most existing storage needs. (2) Dynamically scalable clusters: because grid nodes are peers and stored data items are independent, users can elastically add and remove nodes of the in-memory data grid, while the efficient data backup mechanism of each node provides high reliability for dynamic scaling. (3) Efficient in-memory storage: the in-memory data grid keeps data in memory, avoiding the performance bottleneck of disk I/O, significantly improving data storage and access performance, and better handling the highly concurrent data access requests of big-data workloads.
In-memory data grids still have shortcomings in data loading, memory/disk integration, and data access interfaces, mainly in the following respects: (1) Low data loading performance: an in-memory data grid is memory-based, so at first system startup data must be loaded from a relational database into the grid. For large-scale data, many in-memory data grid products still lack a complete scheme for efficiently loading the critical data into a grid of multiple nodes, and overall loading performance is low. (2) Imperfect memory/disk integration: the in-memory data grid and the relational database are relatively independent, and the consistency of the two is key to applying such systems. Mainstream in-memory data grid products read data from disk into the grid via Read-Through, and persist data from the grid to disk synchronously or asynchronously via Write-Through/Write-Behind (JI S, WANG W, YE C, et al. Constructing a data accessing layer for in-memory data grid; Proceedings of the Fourth Asia-Pacific Symposium on Internetware, 2012 [C]. ACM.), or leave it to the application itself to maintain consistency between the cache and the database (GAUR N, KAPLINGER T E, BHOGAL K S, et al. Dynamic map template discovery and map creation [M]. Google Patents. 2013.). However, when a third-party application that is unaware of the in-memory data grid updates the backing disk database directly, neither the grid nor the application can perceive the change, which easily leads to stale-cache problems (GWERTZMAN J, SELTZER M I. World Wide Web Cache Consistency; Proceedings of the USENIX Annual Technical Conference, 1996 [C].); memory/disk integration needs further improvement. (3) Lack of a unified data access interface: in-memory data grids are highly flexible in data storage, but this flexibility leads to non-uniform access interfaces. Most mainstream in-memory data grid products lack compatibility with legacy systems, in particular with structured query language. Although mainstream products support part of the SQL syntax, the supported syntax is still incomplete and the data access pattern differs from traditional database development, which greatly limits the further adoption of in-memory data grids: developers must do secondary development for a specific business and a specific grid product, adding extra development and learning costs.
Summary of the invention
The purpose of the invention: to further improve the data access integration capability of the in-memory data grid by extending support for the JDBC interface and the SQL language, and by improving the SQL request processing flow.
The principle of the invention: a parallel loading scheme based on data partition information. Loading tasks are automatically assigned to each node according to the grid's data partition information, and each node loads only local data, avoiding cluster communication and data migration costs and improving loading performance. In addition, filtering out non-critical data improves memory utilization.
At initial startup of the in-memory data grid, data must be loaded from the backing store into the grid so that applications can access the grid's data directly, improving system performance. Data loading is an important part of system initialization, and an efficiently implemented loading scheme can effectively improve grid initialization performance.
A complete data loading scheme must consider the following aspects: (1) Which data to load: the backing database stores data for many different businesses, and during loading only the data related to the business running in the in-memory data grid should be loaded; effectively defining the critical data and filtering out irrelevant data is one key to improving resource efficiency and system performance during loading. (2) Data model conversion: the in-memory data grid stores data as key/value pairs, which favors horizontal scaling of grid nodes, where both key and value are data objects; in a relational database, data is stored as tables. Loading data from a relational database into the grid therefore first requires automatic conversion from the relational data model to the key/value data model. (3) How to assign loading tasks to grid nodes: the in-memory data grid is a horizontally scalable distributed system whose nodes store data without obvious interdependence; during loading, assigning tasks so that each node's CPU and memory resources are used efficiently and the loading efficiency of the whole grid is maximized is another key issue.
Step 1: data model conversion.
The invention implements automatic data model conversion during loading, mainly covering key generation, value generation, and index construction. Key generation must address two issues: (1) Key uniqueness: grid data generally lives in Map structures, and key uniqueness is a precondition for storing data in a Map. (2) Key usability: the in-memory data grid computes a key's hash value to determine where the data is placed, so a well-designed key must hash cheaply; moreover, all reads in the grid are keyed, so an overly complex key generation scheme imposes unnecessary performance overhead on every read. To guarantee uniqueness and usability, all keys in the invention exist as strings. For a table with a single primary key, the primary key is used directly as the data object's key; for a table with a composite primary key, the key columns are concatenated with a special separator to generate a single key; for a table with no explicitly specified primary key, an auto-increment strategy is used, maintaining an incrementing integer for each value object as its corresponding key.
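The three primary-key cases described above can be sketched as follows. This is an illustrative, language-neutral sketch rather than the patented implementation; the separator character, function name, and counter are assumptions.

```python
# Illustrative sketch: string key generation for the three primary-key
# cases (single key, composite key, no key). SEP and make_key are
# hypothetical names, not from the patent.
from itertools import count

SEP = "\x1f"      # assumed "special separator" for composite keys

_auto = count(1)  # incrementing integer for tables without a primary key

def make_key(row: dict, pk_columns: list) -> str:
    """Build a unique string key for one table row."""
    if len(pk_columns) == 1:            # single primary key: use it directly
        return str(row[pk_columns[0]])
    if len(pk_columns) > 1:             # composite key: join with separator
        return SEP.join(str(row[c]) for c in pk_columns)
    return str(next(_auto))             # no primary key: auto-increment
```

Because every key is a plain string, hashing stays cheap and the Map's uniqueness requirement is met in all three cases.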
For value generation, the main concern is generality. Different tables in a relational database have different structures, so the key question is how to map heterogeneous table data uniformly into Map values. The invention converts each database table into one Map, with each record of the table corresponding to one key/value pair in that Map. From the table's metadata, a data object class is generated dynamically for each table, with attribute types in the class mapped one-to-one to the relational column types; all generated data object classes share a common parent class, so the object instances generated for every record of a table can be stored in Maps of the same structure.
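A minimal sketch of this table-to-Map conversion, assuming a common parent class and one dynamically generated subclass per table (analogous in spirit to cglib's BeanGenerator; all class and function names here are illustrative):

```python
# Sketch: each table row becomes an instance of a dynamically generated
# class; all such classes share one parent, so every Map stores
# structurally uniform objects. Names are hypothetical.

class GridObject:
    """Assumed common parent of all generated data-entity classes."""
    def __init__(self, **fields):
        self.__dict__.update(fields)

def class_for_table(table_name: str, columns: list) -> type:
    # Dynamic subclass generation, standing in for cglib's BeanGenerator.
    return type(table_name.capitalize(), (GridObject,), {"columns": columns})

def table_to_map(table_name, columns, pk, rows):
    """Convert one table into one Map: primary key -> data object."""
    cls = class_for_table(table_name, columns)
    return {str(row[pk]): cls(**row) for row in rows}
```

Because every generated class derives from the same parent, a single distributed Map type can hold rows from any table without per-table storage code.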
Index information in a relational database is the primary means of efficient data access. To synchronize the relational database's index information into the in-memory data grid, the invention specially designs an index manager. Before data is synchronized into the grid, the table metadata is used to create the corresponding distributed Map, and the database's column index information is extracted and the corresponding indexes are added to the Map. During index-based queries, a Map with indexes can address the matching value objects directly.
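The index-manager idea can be sketched as a secondary index over a Map: an indexed column query addresses matching values directly instead of scanning every entry. This is an assumption-laden sketch; the patent's index manager is not specified at this level of detail.

```python
# Sketch: build a secondary index (column value -> matching keys) over a
# Map of records, so indexed queries avoid a full scan. All names are
# illustrative.
from collections import defaultdict

def build_index(data: dict, column: str) -> dict:
    """Map each value of the indexed column to the keys of matching entries."""
    index = defaultdict(set)
    for key, record in data.items():
        index[record[column]].add(key)
    return index

def query_by_index(data: dict, index: dict, value):
    """Index-based lookup: direct addressing instead of scanning data."""
    return [data[k] for k in index.get(value, ())]
```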
Step 2: homogenized data loading.
When the original single-node loading scheme is applied to an in-memory data grid cluster, only the master node loads data and then redistributes it to the slave nodes. This easily makes the single node a bottleneck, and because all tasks execute on the master node, resource utilization is uneven. The invention therefore designs a homogenized data loading scheme.
In the homogenized loading scheme, the in-memory data grid first elects a master node. The master node reads the relational database metadata and injects it into the grid; each slave node reads the metadata and automatically generates local classes via the model conversion method. The master node then reads the database's data volume information, shards the data uniformly, maps shards to nodes, and assigns each node its loading task; each node loads the data of its own shard, completing the homogenized loading scheme.
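The master's sharding step above can be sketched as splitting the row count into near-equal contiguous ranges, one per node. The range-based shard strategy and all names here are assumptions; the patent only states that the data is sharded uniformly.

```python
# Sketch of uniform sharding: split [0, total_rows) into as many
# near-equal ranges as there are nodes. Names are hypothetical.

def assign_shards(total_rows: int, nodes: list) -> dict:
    """Return {node: (start, end)} covering [0, total_rows) evenly."""
    n = len(nodes)
    base, extra = divmod(total_rows, n)
    shards, start = {}, 0
    for i, node in enumerate(nodes):
        size = base + (1 if i < extra else 0)  # spread the remainder
        shards[node] = (start, start + size)
        start += size
    return shards
```

Each node would then issue a range-limited query for its own shard and load only those rows.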
Step 3: business template design.
The parallel loading scheme based on data partition information can, through the configuration of a key business template, load only the data related to the key business, giving data loading greater flexibility while improving loading performance and memory utilization. The invention configures the key business data mainly at the following three granularities.
(1) By database table name: the database tables related to the key business are configured in a configuration file; during loading, only the configured tables are read, and unrelated tables are filtered out directly and never loaded into the in-memory data grid.
(2) By field type: relational databases contain many fields of special types, such as Blob and Clob. A typical Blob is a picture or an audio file, which because of its size must be handled specially in a relational database; a Clob field is generally a large character object, such as an XML document. Owing to the special properties of these fields, loading them into memory severely affects performance. In the business template design, users can configure type-based field filtering to filter out data that hurts memory storage efficiency and access performance, improving system performance.
(3) By SQL filter: users can define custom data query statements; during loading, the loading scheme automatically loads only the data specified by the user-defined query statement rather than the full data set.
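The three granularities can be combined into one filter pass, sketched below. The template format, field names, and the use of an in-process predicate in place of a real SQL filter are all assumptions for illustration.

```python
# Sketch of the three-granularity business template: (1) table allow-list,
# (2) field-type exclusion, (3) per-table row predicate standing in for a
# user-defined SQL filter. The TEMPLATE structure is hypothetical.

TEMPLATE = {
    "tables": {"orders", "customers"},           # (1) tables to load
    "exclude_types": {"BLOB", "CLOB"},           # (2) field types to drop
    "filters": {"orders": lambda r: r["year"] >= 2017},  # (3) row filter
}

def load_table(name, columns, rows, template=TEMPLATE):
    """Apply all three filter granularities to one table's rows."""
    if name not in template["tables"]:                    # granularity 1
        return []
    keep = [c for c, t in columns.items()
            if t not in template["exclude_types"]]        # granularity 2
    pred = template["filters"].get(name, lambda r: True)  # granularity 3
    return [{c: r[c] for c in keep} for r in rows if pred(r)]
```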
Step 4: parallel loading based on data partition information.
To address the shortcomings of the existing homogenized loading scheme of in-memory data grids and further improve loading performance and memory efficiency, the invention proposes a parallel loading scheme based on data partition information, mainly characterized as follows: (1) The master node uses the grid's data partition information to assign loading tasks efficiently to each grid node, so that each node loads only the data that consistent hashing places on that node, avoiding inter-node data migration and cluster communication costs and improving data loading performance. (2) Only data related to the key business is loaded; the key business can be configured via the pre-designed business template.
To reduce the network communication and data migration costs of the homogenized loading scheme, the invention proposes the parallel loading scheme based on data partition information. In this scheme, the in-memory data grid cluster is started first and a master node is elected by the cluster; the master node acquires a distributed lock. The master reads the primary key information of the relational database data and, according to the grid's data partition information, distributes the primary keys into different sets so that each data item is stored on its local node after loading; the distributed lock is then released. Each grid node obtains the database metadata and dynamically generates local classes. All grid nodes then concurrently fetch from the distributed data structure the primary keys of the data they need to load, generate loading logic from those keys, and load the data in parallel.
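The master's key-distribution step can be sketched as hashing each primary key to a partition and grouping keys by the node that owns that partition. Here `zlib.crc32` stands in for the grid's actual hash function, and the partition-to-node routing table is illustrative.

```python
# Sketch: group primary keys by owning node using partition information,
# so each node later loads only its own keys. The hash function and
# routing table are stand-ins, not the patent's.
import zlib

PARTITIONS = 8
# Hypothetical routing table: partition index -> owning node.
PARTITION_OWNER = {p: f"node{p % 3}" for p in range(PARTITIONS)}

def partition_of(key: str) -> int:
    return zlib.crc32(key.encode()) % PARTITIONS

def group_keys_by_node(primary_keys):
    """Master step: distribute primary keys into per-node sets."""
    groups = {}
    for k in primary_keys:
        node = PARTITION_OWNER[partition_of(k)]
        groups.setdefault(node, set()).add(k)
    return groups
```

Each node then reads only its own set from the distributed structure, so loaded data already lives on the node that owns it and no post-load migration is needed.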
Compared with the prior art, the present invention has the following advantages: the placement of data is computed from the grid's data partition information, and loading tasks are assigned to each grid node so that every node loads only data stored locally, avoiding cluster communication and data migration costs and improving data loading performance. In addition, only the key business data configured by the user is loaded during loading, making full use of memory resources.
Brief description of the drawings
Fig. 1 shows the in-memory data grid system architecture.
Embodiment
The present invention is described in detail below with reference to specific embodiments and the accompanying drawings; the operating environment of the method of the present invention is shown in Fig. 1.
Through research on the parallel loading scheme based on data partition information, the change data capture mechanism, and data access integration technology, the invention implements an in-memory data grid system oriented to the Cache-Aside pattern on the basis of an in-memory data grid; Fig. 1 shows its system architecture and application scenario. The architecture illustrates the module composition and associated components inside the system, together with the request access flows and data flows of each module under specific business scenarios; the operating mechanism of the system is described in detail below.
First, each node of the in-memory data grid is started. Once the nodes satisfy the user-defined cluster condition, the cluster begins loading data. In a cluster environment, the default loading scheme is the parallel loading scheme based on data partition information, which automatically assigns loading tasks to each node; the data each node loads is exactly the data that consistent hashing distributes to that node, so there is no extra cluster communication or data migration cost. Meanwhile, data unrelated to the user-defined business is automatically filtered out during loading, making full use of memory resources and maximizing system initialization loading performance.
Next, the CDC (change data capture) component is started to capture database data changes in real time and forward the changed data to the in-memory data grid, keeping the grid's data consistent with the backing relational database.
Finally, the user replaces the database driver in the existing business system with the JDBC driver and starts the application server. When a user request passes through the SQL splitter of the in-memory data grid's JDBC driver, requests that the grid supports and that belong to user-defined business are handed to the grid's SQL compiler and execution engine; all others are passed directly to the database for execution. For SQL requests executed by the in-memory data grid, any resulting grid data changes are persisted to the backing database synchronously or asynchronously; for SQL requests executed by the database, if user-defined data changes, the CDC component captures the change and synchronizes it to the grid. Under the Cache-Aside pattern, the in-memory data grid and the database form a bidirectional synchronization mechanism, further improving the memory/disk integration scheme.
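The routing decision described above can be sketched as a simple dispatcher: SQL the grid supports and that touches user-defined business tables goes to the grid, everything else falls through to the database. The support test and the business table list are hypothetical placeholders, much cruder than a real SQL parser.

```python
# Sketch of grid-vs-database request routing. GRID_TABLES and GRID_VERBS
# are illustrative stand-ins for the user-defined business configuration
# and the subset of SQL the grid's engine supports.

GRID_TABLES = {"orders"}           # assumed user-defined business tables
GRID_VERBS = {"SELECT", "INSERT"}  # assumed SQL supported by the grid

def route(sql: str) -> str:
    """Return 'grid' if the grid should execute this SQL, else 'database'."""
    verb = sql.strip().split()[0].upper()
    in_business = any(t in sql.lower() for t in GRID_TABLES)
    return "grid" if verb in GRID_VERBS and in_business else "database"
```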
This architecture is a memory/disk fusion architecture that exploits, to a certain extent, the high concurrency of the in-memory data grid and allows the grid to fit better into existing systems; it is a new attempt for in-memory data grid systems. The concrete implementation of each key technology is discussed below. A complete data loading flow comprises metadata loading, model conversion, index loading, and data loading; the detailed process of data loading is described below.
Metadata loading, i.e. the implementation of the loadMetaData() method. The database is connected first, and the metadata of the database tables is obtained through the getMetaData method of JDBC's ResultSet class; the metadata is then converted into BeanGenerator objects of the open-source tool cglib. Cglib is a powerful, high-performance, high-quality code generation library that can extend Java classes and implement Java interfaces at runtime; many well-known open-source tools such as Spring and Hibernate use it for dynamic bytecode generation. While the metadata is converted into BeanGenerator objects, it is also stored in TableMeta classes to hold extra information such as primary keys. BeanGenerator is mainly used for the dynamic class generation in the subsequent model conversion, while TableMeta mainly provides the primary key information, table names, column names, and other extra information needed during data loading.
Model conversion, i.e. the implementation of the populateBeanMap() method. Model conversion mainly uses the BeanGenerator and TableMetaData objects obtained by metadata loading, together with cglib's BeanMap class, to dynamically construct classes; the in-memory data grid Object class is the parent of all such data entity classes. Index loading loads the index information of the database tables. Since the in-memory data grid implements an index manager, during data loading the database's index information can be extracted through the JDBC interface and the corresponding indexes added to the Map, improving the performance of value queries.
After metadata loading, model conversion, and index loading are all complete, the database data itself is finally loaded. In the parallel loading scheme based on data partition information, metadata loading, model conversion, and index loading proceed as described above; during the data loading itself, each node examines the database data and judges whether each item is stored locally, loading it only if so, which guarantees that every item a node loads is stored locally. Because the grid's data routing table stores the cluster information and each node's partition information, the partition an item belongs to can easily be determined from its id, and hence the node where it resides.
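Each node's local-ownership check can be sketched as a lookup against the routing table: hash the item's id to a partition and compare the partition's owner with the current node. The hash function here is a trivial stand-in for the grid's consistent hash, and all names are assumptions.

```python
# Sketch of the per-node "is this item mine?" check used during data
# loading. hash_id is a toy stand-in for the grid's consistent hash.

def hash_id(data_id: str) -> int:
    # simple placeholder hash over the id's bytes
    return sum(data_id.encode())

def owns(node: str, data_id: str, partitions: int, owner: dict) -> bool:
    """True if `node` owns the partition that `data_id` hashes to."""
    return owner[hash_id(data_id) % partitions] == node

def load_local(node, rows, partitions, owner):
    """Keep only the rows whose partition lives on this node."""
    return [r for r in rows if owns(node, str(r["id"]), partitions, owner)]
```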

Claims (1)

1. A method characterized in that it is implemented by the following steps:
Step 1, data model conversion: mainly comprising key generation, value generation, and index construction; keys exist in the form of strings; for a table with a single primary key, the primary key of the database table is chosen directly as the key of the data object; for a table with a composite primary key, the key columns are concatenated with a special separator to generate a single key; for a table with no explicitly specified primary key, an auto-increment strategy is used, maintaining an incrementing integer for each value object as its corresponding key;
Step 2, homogenized data loading: the in-memory data grid first elects a master node; the master node reads the relational database metadata and injects it into the grid; each slave node reads the metadata and automatically generates local classes according to the model conversion method; the master node then reads the database data volume information, shards the data uniformly, maps shards to nodes, and assigns each node its loading task; each node loads the data of its own shard, completing the homogenized loading scheme;
Step 3, business template design: the parallel loading scheme based on data partition information can, through the configuration of a key business template, load only the data related to the key business, giving data loading greater flexibility while improving loading performance and memory utilization; the key business data is configured mainly at three granularities: database table name, field type, and SQL filtering;
Step 4, parallel loading based on data partition information: the in-memory data grid cluster is started first and a master node is elected by the cluster; the master node acquires a distributed lock; the master reads the primary key information of the relational database data and, according to the grid data partition information, distributes the primary keys into different sets so that each data item is stored on its local node after loading, then releases the distributed lock; each grid node obtains the database metadata and dynamically generates local classes; all grid nodes concurrently fetch from the distributed data structure the primary keys of the data they need to load, generate loading logic from the primary keys, and load the data in parallel.
CN201711309940.9A 2017-12-11 2017-12-11 A kind of memory grid data load method based on data partition Pending CN107977446A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711309940.9A CN107977446A (en) 2017-12-11 2017-12-11 A kind of memory grid data load method based on data partition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711309940.9A CN107977446A (en) 2017-12-11 2017-12-11 A kind of memory grid data load method based on data partition

Publications (1)

Publication Number Publication Date
CN107977446A true CN107977446A (en) 2018-05-01

Family

ID=62009940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711309940.9A Pending CN107977446A (en) 2017-12-11 2017-12-11 A kind of memory grid data load method based on data partition

Country Status (1)

Country Link
CN (1) CN107977446A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108733781A (en) * 2018-05-08 2018-11-02 安徽工业大学 The cluster temporal data indexing means calculated based on memory
CN108733781B (en) * 2018-05-08 2021-10-29 安徽工业大学 Cluster temporal data indexing method based on memory calculation
CN111523002B (en) * 2020-04-23 2023-06-09 中国农业银行股份有限公司 Main key distribution method, device, server and storage medium
CN111523002A (en) * 2020-04-23 2020-08-11 中国农业银行股份有限公司 Main key distribution method, device, server and storage medium
CN111984696A (en) * 2020-07-23 2020-11-24 深圳市赢时胜信息技术股份有限公司 Novel database and method
CN111984696B (en) * 2020-07-23 2023-11-10 深圳市赢时胜信息技术股份有限公司 Novel database and method
CN112667859A (en) * 2020-12-30 2021-04-16 北京久其软件股份有限公司 Data processing method and device based on memory
CN112698957A (en) * 2021-02-02 2021-04-23 北京东方通科技股份有限公司 Data processing method and system based on memory data grid
CN112698957B (en) * 2021-02-02 2024-02-20 北京东方通科技股份有限公司 Data processing method and system based on memory data grid
CN114398105A (en) * 2022-01-20 2022-04-26 北京奥星贝斯科技有限公司 Method and device for loading data from distributed database by computing engine
CN115096291A (en) * 2022-07-06 2022-09-23 上海电气集团智能交通科技有限公司 Electronic map dynamic splicing method based on magnetic nail track
CN117435594A (en) * 2023-12-18 2024-01-23 天津南大通用数据技术股份有限公司 Optimization method for distributed database distribution key
CN117435594B (en) * 2023-12-18 2024-04-16 天津南大通用数据技术股份有限公司 Optimization method for distributed database distribution key

Similar Documents

Publication Publication Date Title
CN107977446A (en) A kind of memory grid data load method based on data partition
US7984043B1 (en) System and method for distributed query processing using configuration-independent query plans
US11893046B2 (en) Method and apparatus for implementing a set of integrated data systems
KR102177190B1 (en) Managing data with flexible schema
US11093459B2 (en) Parallel and efficient technique for building and maintaining a main memory, CSR-based graph index in an RDBMS
US5764977A (en) Distributed database architecture and distributed database management system for open network evolution
JP2019515377A (en) Distributed Datastore Versioned Hierarchical Data Structure
US20010016843A1 (en) Method and apparatus for accessing data
CA3137857A1 (en) Multi-language fusion query method and multi-model database system
US10417257B2 (en) Non-blocking database table alteration
US6941309B2 (en) Object integrated management system
US12135707B2 (en) Maintaining data separation for data consolidated from multiple data artifact instances
US11886411B2 (en) Data storage using roaring binary-tree format
US20240004853A1 (en) Virtual data source manager of data virtualization-based architecture
US11960616B2 (en) Virtual data sources of data virtualization-based architecture
CN105468793A (en) Automated management method for simulation model data
WO2023151239A1 (en) Micro-service creation method and related device
CN113468149B (en) Data model development platform
US11263026B2 (en) Software plugins of data virtualization-based architecture
CN117056305A (en) Construction method, model, database system and medium of multisource isomorphic database
US20220374424A1 (en) Join queries in data virtualization-based architecture
Cobbs Persistence Programming: Are we doing this right?
CN117828127B (en) Tree-level cluster user management method based on semi-structured storage
Gilon et al. Compact deterministic distributed dictionaries
Kpekpassi et al. PREPRINT NoSQL databases: A survey

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20180501)