
CN117519945A - Database resource scheduling method, device and system

Info

Publication number
CN117519945A
Authority
CN
China
Prior art keywords
request
node
data block
database
background process
Prior art date
Legal status
Pending
Application number
CN202311675589.0A
Other languages
Chinese (zh)
Inventor
李勇
张震阳
梁继良
黄志军
赵宗鹏
张争
陈凤娟
武仲琳
Current Assignee
Beijing Uxsino Software Co ltd
Original Assignee
Beijing Uxsino Software Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Uxsino Software Co ltd
Priority to CN202311675589.0A
Publication of CN117519945A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/526 Mutual exclusion algorithms
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to the technical field of electronic information, and in particular to a database resource scheduling method, device and system. The method includes: receiving a data block request sent by a request background process, wherein the request carries a data block identifier. In the database resource scheduling method, a resource scheduling mode is selected according to the number of requests, the metadata information of the data block in the request and the request type, so that data blocks are shared accurately and quickly and the difficulty and workload of data scheduling management are reduced. Sharing is realized by transmission over a high-speed internal network and is controlled in a way that conforms to the concurrency control rules of the global database memory, so that memory sharing and fusion are achieved, network IO relieves storage IO, and the system execution efficiency under a shared storage cluster database system architecture can be greatly improved.

Description

Database resource scheduling method, device and system
Technical Field
The present invention relates to the field of electronic information technologies, and in particular, to a method, an apparatus, and a system for scheduling database resources.
Background
In a shared storage database cluster architecture, every database node supports read and write operations on all data, and read/write operations on any node conform to global ACID properties, unlike the common one-writer-many-readers, data partitioning and other distributed architectures. Because all database nodes operate on shared data, sharing the data held in each node's database memory is prone to confusion, and the data processing speed also needs to be improved.
Disclosure of Invention
In view of the above, the present invention aims to provide a method, a device and a system for scheduling database resources.
In a first aspect, an embodiment of the present invention provides a method for scheduling database resources, where the method includes:
receiving a data block request sent by a request background process, wherein the request carries a data block identifier;
acquiring metadata information corresponding to the data block according to the data block identifier;
and executing a corresponding resource scheduling mode according to the metadata information so as to schedule the data block to the request background process.
With reference to the first aspect, the metadata information includes a shared lock holding node list, an exclusive lock holding node number and a previous modified node number; the request type comprises an exclusive lock and a shared lock;
The step of executing a corresponding resource scheduling mode according to the metadata information includes the following steps:
if the metadata information is empty, generating a disk reading instruction and returning the disk reading instruction to the request background process so that the request background process automatically reads a disk;
if any one of the shared lock holding list, the exclusive lock holding node number and the previous modified node number is valid, acquiring a request type corresponding to the request;
and determining and executing a corresponding resource scheduling mode according to the metadata information and the request type.
With reference to the first aspect, the step of generating a disk reading instruction and returning the disk reading instruction to the request background process further includes:
acquiring a request type corresponding to the request;
if the request type is a shared lock, adding the node number of the request background process to the shared lock holding node list;
and if the request type is an exclusive lock, adding the node number of the request background process to the exclusive lock holding node number and the last modified node number.
With reference to the first aspect, metadata information of the data block is: the shared lock holding list is valid, and the exclusive lock holding node number and the last modified node number are empty;
According to the metadata information and the request type, determining and executing a corresponding resource scheduling mode, wherein the method comprises the following steps:
if the request type is a shared lock, sending a shared lock mode data block forwarding instruction to any database node in the shared lock holding node list, so that the database node sends the data block to the request background process, and adding the node number of the request background process to the shared lock holding node list;
if the request type is exclusive lock, sending an exclusive lock mode data block forwarding instruction to any database node in the shared lock holding node list, so that the database node sends the data block to the request background process;
for each target database node in the shared lock holding node list, erasing a shared lock identifier of the target database node;
and simultaneously, clearing the shared lock holding node list, and adding the node number of the request background process to the exclusive lock holding node number and the previous modified node number.
With reference to the first aspect, metadata information of the data block is: the exclusive lock holds a node number and the previous modified node number valid, and the shared lock holds a list empty;
And determining and executing a corresponding resource scheduling mode according to the metadata information and the request type, wherein the method comprises the following steps of:
if the request type is a shared lock, sending a shared lock mode forwarding data block instruction to a database node in the exclusive lock holding node so that the database node sends a data block to the request background process; meanwhile, the node number of the request background process is added to the shared lock holding list, and the exclusive lock holding node number is emptied;
if the request type is exclusive lock, sending a shared lock mode forwarding data block instruction to a database node in the exclusive lock holding node so that the database node sends a data block to the request background process; and simultaneously, adding the node number of the request background process to the exclusive lock holding node number and the previous modification node number.
With reference to the first aspect, metadata information of the data block is: the shared lock holds and the previous modified node is valid, and the exclusive lock holds a node number empty;
and determining and executing a corresponding resource scheduling mode according to the metadata information and the request type, wherein the method comprises the following steps of:
If the request type is a shared lock, sending a shared lock mode forwarding data instruction to the node in the previous modification node so that the node in the previous modification node sends a data block to the request background process, and adding the node number of the request background process to the shared lock holding list;
and if the request type is exclusive lock, sending an exclusive lock mode forwarding data block instruction to the node in the previous modification node so that the node in the previous modification node can send the data block to the request background process, and simultaneously clearing the shared lock holding list and recording node numbers of the request background process in the exclusive lock holding node and the previous modification node.
With reference to the first aspect, after the step of receiving the data block request sent by the request background process, the method further includes:
and under the condition that the number of the requests is a plurality of, scheduling the resources according to a preset concurrent scheduling rule.
In a second aspect, the present application further provides a database resource scheduling apparatus, where the apparatus includes:
the data block request receiving module is used for receiving a data block request sent by a request background process, wherein the request carries a data block identifier;
The acquisition module is used for acquiring metadata information corresponding to the data block according to the data block identifier;
and the scheduling module is used for executing a corresponding resource scheduling mode according to the metadata information so as to schedule the data block to the request background process.
In a third aspect, the present application further provides a database resource scheduling system, the system including a global control module deployed on each database node of the multi-node shared storage database cluster and used for receiving and processing data block requests. The global control module on each node manages a subset of the global resources, and global data block resource management is distributed over the nodes according to consistent hash rules. After receiving a request, the global control module performs data block lock management for the database nodes and sends out a scheduling control instruction; the scheduling control instruction includes: a disk reading instruction, a shared lock mode data block forwarding instruction, an exclusive lock mode data block forwarding instruction and a cancel shared lock instruction.
With reference to the third aspect, the system further includes: the concurrency scheduling module and the request processing module;
the concurrent scheduling module is deployed on each database node of the multi-node shared storage database cluster and is used for receiving the scheduling instruction of the global control module and executing the scheduling control of the data block resources and the authority;
The request processing module is called by a database background process/thread and is used for sending a data block request to the global control module and processing a reply result.
Preferably, for the multi-task, highly concurrent operation scenario of the database, if a plurality of database background processes/threads on a database node request the same data block at the same time, the request processing module sends only one request message and waits for the reply result, and after the request flow is completed, the plurality of database background processes/threads requesting the same data block can share the reply result.
The embodiment of the invention has the following beneficial effects: in the database resource scheduling method, a resource scheduling mode is selected according to the number of requests, the metadata information of the data block in the request and the request type, so that data blocks are shared accurately and quickly and the difficulty and workload of data scheduling management are reduced; sharing is realized by transmission over a high-speed internal network and is controlled in a flow control mode that conforms to the concurrency control rules of the global database memory, so that memory sharing and fusion are achieved and the system execution efficiency under the shared storage cluster database system architecture can be greatly improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, it being obvious that the drawings in the description below are some embodiments of the invention and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a database resource scheduling method according to an embodiment of the present invention;
FIG. 2 is a schematic message flow diagram, provided in an embodiment of the present invention, for the scenario in which the request type is an exclusive lock while the shared lock holding node list and the last modified node number are valid and the exclusive lock holding node number is empty;
FIG. 3 is a schematic diagram of the message flow executed by the request background process, provided in an embodiment of the present invention, for the scenario in which the request type is an exclusive lock while the shared lock holding node list and the last modified node number are valid and the exclusive lock holding node number is empty;
FIG. 4 is a schematic diagram of a database resource scheduling system according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a database resource scheduling device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order to facilitate understanding of the present embodiment, the technical terms involved in the present application are briefly described below.
Data block: the database management system divides all data of a data table into data blocks of equal size, so as to facilitate the management and optimization of data storage and access. The database management system uses the data block as the minimum unit of operation and of disk IO for both read and write operations on data.
Consistent hashing: the consistent hash algorithm proposes four criteria for judging whether a hash algorithm is good in a dynamically changing cache environment: balance, monotonicity, spread (dispersion) and load.
After the technical terms related to the application have been introduced, the application scenarios and design ideas of the embodiment of the application are briefly introduced.
In a traditional single-machine database system, the commonly adopted memory concurrency control model is to establish a large-capacity shared memory; multiple database background business processes operate on the shared memory concurrently, and a memory lock mechanism coordinates concurrency control among the background business processes. Common write-once read-many clusters also use the same database memory management approach as traditional stand-alone databases. In a shared storage cluster database system, however, each database node has its own data memory, and the operations of the database background business processes on each node on shared data resources are not only subject to the concurrency control of other local business processes but also need to conform to the globally consistent concurrency control of data block resources, which obviously increases the complexity of memory consistency management of the shared storage cluster database.
The embodiment of the application provides a database resource scheduling method, a device and a system, wherein the database resource scheduling method is applied to a global control module in a database resource scheduling system under a multi-node shared storage database cluster, the global control module is deployed on each database node in the multi-node shared storage database cluster in the database resource scheduling system, and after a data block request is received, the global control module on any one database node executes a corresponding resource scheduling mode according to metadata information and request types of managed data blocks so as to accurately and efficiently schedule data and facilitate management of the multi-node shared storage database.
Example 1
The application provides a database resource scheduling method, which is shown in fig. 1, and includes:
s110, receiving a data block request sent by a request background process, wherein the request carries a data block identifier.
S120, acquiring metadata information corresponding to the data block according to the data block identifier.
And S130, executing a corresponding resource scheduling mode according to the metadata information so as to schedule the data block to the request background process.
According to the data block resource scheduling method, a corresponding resource scheduling mode is determined and executed based on the number of requests, the metadata information of the data block in the request and the request type, so that data scheduling is carried out accurately and efficiently and the management of the multi-node shared storage database is facilitated. Sharing is realized by transmission over a high-speed internal network, and network IO is used to relieve storage IO, so that the system execution efficiency under the shared storage cluster database system architecture can be greatly improved.
The request background process that sends the data block request performs a consistent hash calculation based on the data block identifier to obtain the data block management node corresponding to the data block, and then sends the request to the global control module on that database node.
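For illustration only, the following minimal sketch in Go (identifiers such as HashRing and ManagingNode are hypothetical; the embodiment does not prescribe an implementation language, a particular hash function or a virtual-point scheme) shows one way the data block identifier could be hashed onto a ring of database node IDs to pick the management node whose global control module receives the request.

```go
package scheduler

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// ringPoint is one point on the hash ring; in practice each node would own
// several virtual points for better balance.
type ringPoint struct {
	hash   uint32
	nodeID int
}

type HashRing struct{ points []ringPoint }

func hash32(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// NewHashRing builds a ring from the cluster's database node IDs.
func NewHashRing(nodeIDs []int, virtualPoints int) *HashRing {
	r := &HashRing{}
	for _, id := range nodeIDs {
		for v := 0; v < virtualPoints; v++ {
			r.points = append(r.points, ringPoint{hash32(fmt.Sprintf("node-%d-%d", id, v)), id})
		}
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i].hash < r.points[j].hash })
	return r
}

// ManagingNode maps a data block identifier to the node whose global control
// module manages that block's metadata (assumes at least one ring point).
func (r *HashRing) ManagingNode(blockID string) int {
	h := hash32(blockID)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i].hash >= h })
	if i == len(r.points) {
		i = 0 // wrap around the ring
	}
	return r.points[i].nodeID
}
```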
In combination with the first aspect, the metadata information includes a shared lock holding node list, an exclusive lock holding node number, and a last modified node number.
The request types include exclusive locks, shared locks.
Step S130 is a step of executing a corresponding resource scheduling mode according to the type of the metadata information, and includes:
s131, if the metadata information is empty, generating a disk reading instruction and returning the disk reading instruction to the request background process so as to enable the request background process to read the disk by itself.
S135, if any one of the shared lock holding list, the exclusive lock holding node number and the previous modified node number is valid, acquiring a request type corresponding to the request;
s136, determining and executing a corresponding resource scheduling mode according to the metadata information and the request type.
When a single request is received, different resource scheduling modes are executed according to the metadata information of the data block in the request: when the metadata is empty, steps S131-S134 are executed; when any item of the metadata is valid, i.e. the metadata is not empty, steps S135-S136 are executed.
In step S131, the global control module generates a disk reading instruction and returns it to the request background process that sent the request, so that this request background process reads the disk by itself.
With reference to the first aspect, step S131 further includes:
s132, obtaining a request type corresponding to the request.
S133, if the request type is the shared lock, adding the node number of the background process to the shared lock holding node list.
S134, if the request type is exclusive lock, the node number of the background process is added to the exclusive lock holding node number and the last modified node number.
After the background process sending the request reads the disk by itself, the metadata information of the data block in the request is updated according to the request type.
In this embodiment, the shared lock holding node list is labeled "A", the exclusive lock holding node is labeled "B", and the last modified node number is labeled "C".
When any of the metadata is valid, i.e., the metadata is not empty, including the following cases:
(1) A is active and B, C is null.
(2) A is empty and B, C is active.
(3) A, C is active and B is null.
In the above three cases, steps S135 to S136 are performed.
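As a hedged illustration of steps S131-S136, the sketch below (Go; all type, field and function names are hypothetical) models the three metadata items, classifies the metadata into the empty case and cases (1)-(3), and handles the empty case by instructing the requester to read the disk and recording it as the new holder. The three non-empty cases are sketched separately after their respective descriptions below.

```go
package scheduler

type LockType int

const (
	SharedLock LockType = iota
	ExclusiveLock
)

// BlockMeta mirrors the three metadata items: the shared lock holding node
// list (A), the exclusive lock holding node number (B) and the last modified
// node number (C); 0 stands for an empty node number.
type BlockMeta struct {
	SharedHolders []int // A
	ExclusiveNode int   // B
	LastModified  int   // C
}

func (m *BlockMeta) empty() bool {
	return len(m.SharedHolders) == 0 && m.ExclusiveNode == 0 && m.LastModified == 0
}

// Instruction is a scheduling control instruction issued by the global
// control module.
type Instruction struct {
	Kind        string // "read_disk", "forward_shared", "forward_exclusive", "release_shared"
	TargetNode  int    // node (or requester) that must execute the instruction
	ReleaseList []int  // s_release_list, attached in case (3)
}

// classifyMeta returns which of the non-empty cases (1)-(3) applies;
// 0 means the metadata is empty.
func classifyMeta(m *BlockMeta) int {
	switch {
	case m.empty():
		return 0
	case len(m.SharedHolders) > 0 && m.ExclusiveNode == 0 && m.LastModified == 0:
		return 1 // A valid, B and C empty
	case len(m.SharedHolders) == 0 && m.ExclusiveNode != 0 && m.LastModified != 0:
		return 2 // B and C valid, A empty
	default:
		return 3 // A and C valid, B empty
	}
}

// dispatchEmpty implements steps S131-S134: the requester reads the disk
// itself and the metadata records it as the new holder.
func dispatchEmpty(m *BlockMeta, reqType LockType, reqNode int) []Instruction {
	if reqType == SharedLock {
		m.SharedHolders = append(m.SharedHolders, reqNode)
	} else {
		m.ExclusiveNode, m.LastModified = reqNode, reqNode
	}
	return []Instruction{{Kind: "read_disk", TargetNode: reqNode}}
}
```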
With reference to the first aspect, the metadata information of the data block is: the shared lock holding list is valid, and the exclusive lock holding node number and the last modified node number are empty, corresponding to case (1), A valid and B, C empty. At this time, step S135 determines and executes a corresponding resource scheduling mode according to the metadata information and the request type, including:
s1350, if the request type is a shared lock, sending a shared lock mode data block forwarding instruction to any database node in the shared lock holding node list, so that the database node sends the data block to a request background process, and adds the node number of the request background process to the shared lock holding node list.
S1351, if the request type is exclusive lock, sending an exclusive lock mode data block forwarding instruction to any database node in the shared lock holding node list, so that the database node sends the data block to a request background process.
S1352, for each target database node in the shared lock holding node list, erasing the shared lock identification of the target database node;
and simultaneously, clearing the shared lock holding node list, and adding the node number of the background process to the exclusive lock holding node number and the previous modified node number.
That is, when the metadata information matches case (1), A valid and B, C empty, different resource scheduling modes are executed according to the request type.
Specifically, when the request type is a shared lock, step S1350 selects any one of the database nodes in the shared lock holding node list (i.e. A) as the first node, the global control module controls the background process corresponding to the first node to send the data block to the background process that sent the request, and the node number of the background process that sent the request is added to A.
When the request type is an exclusive lock, step S1351 selects any one of the database nodes in the shared lock holding node list (i.e. A) as the second node, where the second node may or may not be the same node as the first node; the second node is then controlled to send the data block to the background process that sent the request.
After that, since the request type is an exclusive lock, step S1352 controls the background process corresponding to each target node in A to release the shared lock, where a target node is any database node other than the second node; at the same time A is cleared, and the node number corresponding to the background process that sent the request is added to B and C.
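Continuing the BlockMeta and Instruction types of the earlier sketch, the following hypothetical handler for case (1) forwards the block from any shared lock holder and, for an exclusive request, also instructs every other holder to release its shared lock before the metadata is rewritten.

```go
package scheduler

// scheduleCase1 handles case (1): A valid, B and C empty (types continue the
// earlier sketch). Any node in A can forward the block; an exclusive request
// additionally releases every other shared lock and rewrites the metadata.
func scheduleCase1(m *BlockMeta, reqType LockType, reqNode int) []Instruction {
	holder := m.SharedHolders[0] // case (1) guarantees A is non-empty
	if reqType == SharedLock {
		// S1350: forward in shared lock mode, the requester joins A.
		m.SharedHolders = append(m.SharedHolders, reqNode)
		return []Instruction{{Kind: "forward_shared", TargetNode: holder}}
	}
	// S1351-S1352: forward in exclusive lock mode, every other node in A
	// releases its shared lock, then A is cleared and B = C = requester.
	ins := []Instruction{{Kind: "forward_exclusive", TargetNode: holder}}
	for _, n := range m.SharedHolders {
		if n != holder {
			ins = append(ins, Instruction{Kind: "release_shared", TargetNode: n})
		}
	}
	m.SharedHolders = nil
	m.ExclusiveNode, m.LastModified = reqNode, reqNode
	return ins
}
```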
With reference to the first aspect, the metadata information is: the exclusive lock holding node number and the last modified node number are valid, and the shared lock holding list is empty. That is, in case (2), A empty and B, C valid, step S135 determines and executes a corresponding resource scheduling mode according to the metadata information and the request type, including:
s1353, if the request type is shared lock, sending a shared lock mode forwarding data block instruction to a database node in the exclusive lock holding node so that the database node sends the data block to a request background process; meanwhile, the node number of the background process is added to the shared lock holding list, and the exclusive lock holding node number is cleared.
S1354, if the request type is exclusive lock, sending a shared lock mode forwarding data block instruction to a database node in the exclusive lock holding node so that the database node sends the data block to a request background process; meanwhile, the node number of the request background process is added to the exclusive lock holding node number and the previous modification node number.
When the request type is a shared lock, a shared lock mode data block forwarding instruction is sent to the database node recorded in the exclusive lock holding node number, so that this database node sends the data block to the request background process to share it; the data block lock held by this node is demoted to a shared lock, the node number of the request background process that sent the request is then added to A, B is cleared and C is kept unchanged.
When the request type is an exclusive lock, the database node recorded in the exclusive lock holding node number likewise shares the data block with the request background process that sent the request; because the request type is an exclusive lock, the data block is finally held exclusively by the background process that sent the request. The lock level of the data block on the forwarding database node is updated to null and the dirty page identifier of the data block on that node is eliminated. After the request background process that sent the request has received the data block and marked it as dirty, the global control module records the node number corresponding to that background process in B and C.
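A sketch of case (2) under the same assumed types. Where the original text is ambiguous (the summary calls the instruction for an exclusive request a shared lock mode forwarding instruction, while the detailed behaviour of clearing the holder's lock and dirty flag matches exclusive mode forwarding), the sketch follows the detailed behaviour.

```go
package scheduler

// scheduleCase2 handles case (2): B and C valid, A empty (types continue the
// earlier sketch). The single exclusive holder forwards the block.
func scheduleCase2(m *BlockMeta, reqType LockType, reqNode int) []Instruction {
	holder := m.ExclusiveNode
	if reqType == SharedLock {
		// The holder forwards the block in shared lock mode; its own lock is
		// demoted to a shared lock on that node. The requester is added to A,
		// B is cleared and C stays unchanged.
		m.SharedHolders = append(m.SharedHolders, reqNode)
		m.ExclusiveNode = 0
		return []Instruction{{Kind: "forward_shared", TargetNode: holder}}
	}
	// The holder forwards the block to the requester; per the detailed
	// description its lock level is set to empty and its dirty flag is
	// cleared, and the requester becomes the new exclusive holder and the
	// last modified node.
	m.ExclusiveNode, m.LastModified = reqNode, reqNode
	return []Instruction{{Kind: "forward_exclusive", TargetNode: holder}}
}
```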
With reference to the first aspect, the metadata information of the data block is: the shared lock holding list and the last modified node number are valid, and the exclusive lock holding node number is empty, i.e., case (3), A and C valid and B empty. Step S135 determines and executes a corresponding resource scheduling mode according to the metadata information and the request type, including:
s1358, if the request type is the shared lock, sending a shared lock mode forwarding data instruction to the node in the previous modification node, so that the node in the previous modification node sends the data block to the request background process, and meanwhile, adding the node number of the request background process to the shared lock holding list.
S13591, if the request type is exclusive lock, transmitting an exclusive lock mode forwarding data block instruction to the node in the previous modification node, so that the node in the previous modification node transmits the data block to the request background process, and simultaneously, clearing the shared lock holding list and recording the node number of the request background process in the exclusive lock holding node and the previous modification node.
After the requesting database background process has received the data block and marked it as dirty, for each node in the shared lock holding list, the background process corresponding to that node is controlled to release its shared lock, the shared lock holding list is emptied, and the node corresponding to the requesting database background process is added to the exclusive lock holding node number and the last modified node number.
That is, when A and C are valid and B is empty, the global control module first controls the database node recorded in C to send the data block to the request background process so as to realize data sharing; if the request type is a shared lock, the node number of the request background process is added to A. If the request type is an exclusive lock, the data block lock level of the database node recorded in C is degraded from a shared lock to empty and the dirty page identifier of the data block on that node is eliminated; then, after the request background process that sent the request has received the data block and marked it as dirty, the global control module sends a message to all nodes in A so that those nodes release their shared locks, A is emptied, and the node number corresponding to the request background process that sent the request is recorded in B and C.
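A sketch of case (3) under the same assumed types, matching the FIG. 2 flow for an exclusive request: the dirty page holder recorded in C forwards the block, every node in A is told to release its shared lock, and s_release_list rides on the generated instructions.

```go
package scheduler

// scheduleCase3 handles case (3): A and C valid, B empty (types continue the
// earlier sketch). For an exclusive request this is the FIG. 2 flow: the
// dirty page holder recorded in C forwards the block, every node in A
// releases its shared lock, and s_release_list is attached to each instruction.
func scheduleCase3(m *BlockMeta, reqType LockType, reqNode int) []Instruction {
	dirtyHolder := m.LastModified
	if reqType == SharedLock {
		// S1358: C forwards the block in shared lock mode, the requester joins A.
		m.SharedHolders = append(m.SharedHolders, reqNode)
		return []Instruction{{Kind: "forward_shared", TargetNode: dirtyHolder}}
	}
	// S13591: C forwards the block in exclusive lock mode, all shared holders
	// are told to release their locks, then A is cleared and B = C = requester.
	releaseList := append([]int(nil), m.SharedHolders...) // s_release_list
	ins := []Instruction{{Kind: "forward_exclusive", TargetNode: dirtyHolder, ReleaseList: releaseList}}
	for _, n := range releaseList {
		ins = append(ins, Instruction{Kind: "release_shared", TargetNode: n, ReleaseList: releaseList})
	}
	m.SharedHolders = nil
	m.ExclusiveNode, m.LastModified = reqNode, reqNode
	return ins
}
```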
Referring to fig. 2, fig. 2 is a schematic message flow diagram in a case where a request type is exclusive lock when A, C is valid and B is empty according to an embodiment of the present invention.
Specifically, at step 201 the request background process (any background process that sends a data block request) first calculates, through consistent hashing, the management node of the data block for which the exclusive lock is requested, where the management node is the node on which the responsible global control module is located, and sends a data block request message to the global control module of that management node.
At step 202, after receiving the request message, the global control module determines that the metadata state of the data block is A and C valid and B empty, then sends a dirty page data block forwarding message to the dirty page holding node (i.e., the node recorded in C) and sends a release read lock command to each target node in A (the shared lock holding node list); both messages carry a shared lock release list, hereinafter abbreviated as s_release_list. If the node where the global control module is located and the dirty page holding node are the same node, the global control module calls the local concurrent scheduling module to execute the data block transmission; if they are different nodes, the global control module sends a write exclusive data block forwarding instruction to the dirty page holding node.
At step 203, after receiving the instruction to forward the dirty data block, the concurrency scheduling module of the dirty page holding node erases the local dirty flag, clears the lock state, and then sends the data block message, with the dirty flag and s_release_list attached, to the request background process that requested the exclusive lock.
At step 204, after each target node in A (the shared lock holding nodes) receives the command to release the shared lock, it releases its shared lock on the target data block and then sends a confirmation message, with s_release_list attached, to the background process of step 201 that sent the exclusive lock request.
The request background process that sent the request can determine from the s_release_list attached to the messages which shared locks need to be released and the corresponding node numbers. After it collects the data block message, it loads the data block into the database shared buffer and marks the data block as dirty according to the dirty flag carried with it, as in step 201. After both the read lock release messages and the data block message have been collected, the data block lock state is marked as exclusively held, and the data block can be used.
Further, in the process in which the global control module controls the related nodes to release the shared lock, each node releases its shared lock and is required to send a confirmation message to the background process that sent the request; all messages in the scheduling flow carry s_release_list, the background process that sent the request judges according to s_release_list whether all read lock release messages have been collected, and only after all data blocks and all read lock release messages have been collected can the request be considered complete and the read/write operation on the data block begin.
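The requester-side completion condition can be sketched as follows (Go, with hypothetical message shapes): the request is considered complete only once the data block message and one release confirmation per node listed in s_release_list have all been collected.

```go
package scheduler

// replyMsg is a hypothetical view of the messages a requesting background
// process may receive while an exclusive lock request is in flight.
type replyMsg struct {
	Kind        string // "data_block", "release_ack" or "read_disk"
	FromNode    int
	Dirty       bool  // dirty flag carried with a forwarded data block
	ReleaseList []int // s_release_list attached to every message of the flow
}

// waitExclusive returns true once the data block and one shared lock release
// confirmation per node in s_release_list have all been collected, i.e. the
// point at which the block may be marked as exclusively held and used.
func waitExclusive(replies <-chan replyMsg) bool {
	gotBlock := false
	listKnown := false
	pending := map[int]bool{} // nodes whose release confirmation is still missing
	for msg := range replies {
		if !listKnown { // every message of the flow carries s_release_list
			for _, n := range msg.ReleaseList {
				pending[n] = true
			}
			listKnown = true
		}
		switch msg.Kind {
		case "data_block", "read_disk":
			gotBlock = true // load into the shared buffer; mark dirty if msg.Dirty
		case "release_ack":
			delete(pending, msg.FromNode)
		}
		if gotBlock && len(pending) == 0 {
			return true // mark the data block lock state as exclusively held
		}
	}
	return false
}
```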
Further, the concurrent scheduling module performs lock level updates, i.e., updating an exclusive lock to a shared lock, releasing a shared lock, or updating a shared lock to empty. When the concurrent scheduling module executes a lock demotion operation but the data block is being used so that the demotion cannot be executed immediately, the operation is recorded as a task to be done, and this task is executed by the background process that last used the data block after it has finished using it.
In the present application, the concurrency scheduling module receives the execution commands sent by the global control module and operates on the node's database buffer; the types of operations include: forwarding a data block in read shared mode, forwarding a data block in write exclusive mode, and cancelling the shared lock state.
In combination with the first aspect, the database resource scheduling system provided in the application further includes a request processing module and a concurrency scheduling module; the request processing module handles the data block request operations of the background processes.
After the step of receiving the data block request sent by the request background process in step S110, the method further includes:
and S140, performing resource scheduling according to a preset concurrent scheduling rule under the condition that the number of the requests is multiple.
Namely, when concurrent requests exist, the request processing module schedules resources according to a preset concurrency scheduling rule. Specifically: while a sent request message has not yet received a reply, if other background processes apply for the same data block at the same time, the subsequent requests do not send messages; when the first application receives its reply, the subsequent requests can share the reply result of that request. A concurrent request list, abbreviated as c_request_list, is used to describe the concurrent request scenario: a request queue is registered in the list for each data block, and the queue is destroyed after the request is completed.
Since the level of an exclusive lock is higher than that of a shared lock, for the scenarios in which the request processing module is involved we define the following rules, abbreviated as {rules} (a code sketch of these rules follows the list):
(1) If a read request is sent first and a write request is sent later, the write request cannot share the reply result of the read request, and the write request must also send its own data block request.
(2) If a write request is sent first and a read request is sent later, the read request can share the reply result of the write request, and the read request does not need to send its own request.
(3) If a first read request is followed by one or more read requests, the subsequent processes do not issue requests but wait for the result of the first request and share it after the reply is received.
(4) If a first write request is followed by one or more write requests, the subsequent processes do not issue data block requests but wait for the result of the first request and share it after the reply is received.
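A minimal sketch of rules (1)-(4), assuming the only information needed is whether each request is a read (shared lock) or a write (exclusive lock); the function name is hypothetical.

```go
package scheduler

// canShareReply encodes rules (1)-(4): a later request for the same data
// block may reuse the reply of the first in-flight request except when a
// read (shared lock) request came first and the later request needs a write
// (exclusive lock), because the exclusive lock level is higher.
func canShareReply(firstIsWrite, laterIsWrite bool) bool {
	if !firstIsWrite && laterIsWrite {
		return false // rule (1): the write must send its own data block request
	}
	return true // rules (2)-(4): wait for and share the first reply
}
```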
The request processing module waits for the reply result after sending a request. If s_release_list in the reply message is not empty, then in addition to waiting for the data block transmission, all shared lock release confirmation messages must be collected before the whole request flow is complete; otherwise, the request is completed after a single reply is received, and the reply result will only be the data block send message (with an empty s_release_list) or the disk read message.
Further, after completing a request, the request processing module loads the data block into the database shared buffer, then checks whether other request background processes are concurrently requesting the same data block in c_request_list, and if so, sends signals, according to the rules defined in {rules}, to the other request background processes that can share the current reply result, so that those request background processes use the data in the shared buffer.
Further, after completing a request, the request processing module records the lock level of the data block in the database node's memory, where the lock level is either a read shared lock or a write exclusive lock, and the rule for data block lock levels is: exclusive lock > shared lock. When a background process on a database node needs to use a data block, a read operation corresponds to the shared lock use level and a write operation corresponds to the exclusive lock use level; if the lock level of the data block recorded in the node's memory is greater than or equal to the required use level, the in-memory data can be used directly, otherwise a data block request must be sent to the global control module.
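The local sufficiency check described above can be sketched as a simple lock level comparison (hypothetical names; exclusive > shared > none).

```go
package scheduler

type lockLevel int

const (
	levelNone lockLevel = iota
	levelShared
	levelExclusive // exclusive lock > shared lock
)

// canUseLocalCopy applies the rule above: a read needs at least a shared
// lock, a write needs an exclusive lock. If the level recorded for the block
// in the node's memory is high enough, the in-memory data is used directly;
// otherwise a data block request must go to the global control module.
func canUseLocalCopy(held lockLevel, isWrite bool) bool {
	need := levelShared
	if isWrite {
		need = levelExclusive
	}
	return held >= need
}
```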
In this embodiment, the concurrent scheduling module is deployed on each database node of the multi-node shared storage database cluster; the module receives the scheduling instructions of the global control module and executes the concurrent scheduling of the data blocks. The specific operations are: the read shared mode data block forwarding instruction, the write exclusive mode data block forwarding instruction, and the cancel shared lock state instruction.
When the global control module issues a scheduling task for a data block, if the data block in memory is not being requested or used, the scheduling instruction is executed immediately; if the data block in memory is being requested or used, the concurrent scheduling module does not execute the scheduling instruction immediately but registers it as a task to be done for the data block.
The task to be done for a data block is performed by the last process that used the data block, after it has finished using it. Whether the concurrent scheduling module executes a forwarding task immediately or executes a registered task to be done, resource scheduling is performed according to the following preset concurrent scheduling rules so as to adjust the lock state of the data block on the node (a code sketch of these rules follows the list):
A. When the local database node holds a shared lock on the target data block and executes a forwarding task for a shared lock request, the lock state is not adjusted.
B. When the local database node holds a shared lock on the target data block and executes a forwarding task for an exclusive lock request, the lock state of the target data block on the local database node is set to empty.
C. When the local database node holds an exclusive lock on the target data block and executes a forwarding task for an exclusive lock request, the lock state of the data block on the local database node is set to empty.
D. When the local database node holds a shared lock on the target data block and executes a forwarding task for an exclusive lock request, the lock state of the data block on the local database node is set to empty.
E. When the local database node holds a shared lock on the target data block and executes a forwarding task for an exclusive lock request, the lock state of the data block on the local database node is set to empty.
F. When the local database node holds a shared lock on the target data block and executes a task of cancelling the shared lock state, the lock state of the data block on the local database node is set to empty.
G. When the local database node holds an exclusive lock on the target data block and executes a task of cancelling the shared lock state, this behavior is illegal.
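A sketch of rules A-G, reusing the lock levels of the previous sketch. The combination of an exclusive lock with a shared lock forwarding task is not spelled out in the rule list; following the downgrade behaviour described elsewhere in this embodiment, it is treated here, as an assumption, as a demotion to a shared lock.

```go
package scheduler

import "errors"

// adjustLockState applies rules A-G to the node's own lock on the target
// data block when a forwarding or cancel task runs (lock levels reuse the
// previous sketch).
func adjustLockState(held lockLevel, task string) (lockLevel, error) {
	switch task {
	case "forward_shared":
		if held == levelExclusive {
			return levelShared, nil // assumed: downgrade write exclusive to read shared
		}
		return held, nil // rule A: a shared lock is kept unchanged
	case "forward_exclusive":
		return levelNone, nil // rules B-E: the local lock state is set to empty
	case "cancel_shared":
		if held == levelExclusive {
			return held, errors.New("illegal: cancel shared lock on an exclusive holder") // rule G
		}
		return levelNone, nil // rule F
	}
	return held, errors.New("unknown task")
}
```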
More specifically, referring to FIG. 3 in combination with the above example, an embodiment of the present invention provides a flowchart of the execution of a request background process that requests an exclusive lock in the scenario where A and C are not empty and B is empty.
Step 301: initiate a request. The request identifies a data block, and the request type is exclusive lock.
Step 302: the background process executing the request adds the requested data block identifier and lock type to c_request_list.
Step 303: send the request message and wait for a reply. The request background process first calculates, through consistent hashing, the management node of the data block for which the exclusive lock is requested, sends an exclusive lock request message to the management node and waits for a reply.
Step 304: collect the data block and its dirty label. The request background process receives the data block reply message, marks the data block according to the dirty flag in the message, and learns from the s_release_list in the message that read lock release confirmations need to be waited for.
Step 305: collect the shared lock release acknowledgements. The request background process collects the read lock release confirmations.
Step 306: mark the data block state as exclusive lock. After the request background process has collected the read lock release confirmations and the data block, the state of the data block in the node's database shared buffer is marked as globally exclusively held.
Step 307: inform other concurrent request background processes. After the request is completed, the request background process checks c_request_list, and if other background processes have concurrent requests for the same data block, a signal is sent to the corresponding request background processes so that they can directly use the shared buffer.
In a second aspect, an embodiment of the present application provides a database resource scheduling apparatus, with reference to FIG. 5, including: a data block request receiving module 10, an acquiring module 20 and a scheduling module 30.
The data block request receiving module 10 is configured to receive a data block request sent by a request background process, where the request carries a data block identifier;
the acquiring module 20 is configured to acquire metadata information corresponding to the data block according to the data block identifier;
the scheduling module 30 is configured to execute a corresponding resource scheduling mode according to the metadata information, so as to schedule the data block to the request background process.
In a third aspect, an embodiment of the present application provides a database resource scheduling system, as shown in FIG. 4, including a global control module deployed on each database node of the multi-node shared storage database cluster and used for receiving requests, carrying out data block lock management for the database nodes according to the requests and sending out scheduling control instructions; the scheduling control instructions include: a disk reading instruction, a shared lock mode data block forwarding instruction, an exclusive lock mode data block forwarding instruction and a cancel shared lock instruction.
According to the method and the device, the global control module is deployed at each node in the multi-node data sharing database cluster, and the preset scheduling mode is correspondingly selected and executed through the request quantity, the request type and the metadata information corresponding to the data blocks in the request, so that orderly, accurate and efficient resource scheduling is realized, and the effect of resource scheduling is greatly improved.
With reference to the third aspect, and referring to fig. 4, the database resource scheduling system provided in the embodiment of the present application further includes: and the concurrency scheduling module and the request processing module.
The concurrent scheduling module is deployed on each database node of the multi-node shared storage database cluster and is used for receiving the scheduling instruction of the global control module and executing the scheduling control of the data block resources and the authority.
And the request processing module is called by a database background process/thread and is used for sending a data block request to the global control module and processing a reply result.
The concurrent scheduling module is deployed on each database node of the multi-node shared storage database cluster and is used for receiving three scheduling instructions sent by the global control module, namely a read sharing mode forwarding data block instruction, a write exclusive mode forwarding data block instruction and a cancel sharing lock state instruction.
Specifically, when executing the instruction for forwarding the data block in read shared mode, the concurrency scheduling module sets the authority of the local database node over the target data block to read shared authority. This means that if the data block is held with write exclusive rights when the read shared mode forwarding instruction is executed, the rights are downgraded from write exclusive to read shared, and the data block is then sent to the database background process that sent the request.
When executing the instruction for forwarding the data block in write exclusive mode, the authority of the local database node over the target data block is set to empty, the dirty mark of the data block is cancelled, and the data block is then sent to the database background process that sent the request. Accordingly, the database background process that sent the request needs to mark the data block as dirty after receiving it.
When executing the cancel shared lock state instruction, the authority of the local database node over the target data block is set to empty, and a confirmation message is then sent to the database background process that sent the request.
The concurrent scheduling module processes the scheduling instructions according to preset rules. Specifically:
if the data block in memory is not being requested or used, the scheduling instruction is executed immediately;
if the data block in memory is being requested or used, execution of the scheduling instruction is deferred and the scheduling instruction is registered as a task to be done for the data block. The task to be done for the data block is performed by the last process/thread that used it, after the data block is no longer in use (a code sketch of this behaviour is given below).
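A minimal sketch of this execute-or-defer behaviour (Go, hypothetical structure and names): a scheduling instruction runs immediately when the block is idle, otherwise it is queued as a task to be done and drained by the last user.

```go
package scheduler

// blockState is a hypothetical per-block record kept by the concurrency
// scheduling module on a database node.
type blockState struct {
	inUse        bool     // the data block is currently being requested or used
	pendingTasks []string // scheduling instructions registered as tasks to be done
}

// handleInstruction executes a scheduling instruction immediately when the
// block is idle; otherwise it registers the instruction as a task to be done.
func (b *blockState) handleInstruction(task string, execute func(string)) {
	if !b.inUse {
		execute(task)
		return
	}
	b.pendingTasks = append(b.pendingTasks, task)
}

// releaseUse is called by the last process/thread that used the block; it
// drains the registered tasks to be done before the block becomes idle.
func (b *blockState) releaseUse(execute func(string)) {
	for _, task := range b.pendingTasks {
		execute(task)
	}
	b.pendingTasks = nil
	b.inUse = false
}
```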
In summary, the database resource scheduling system provided by the embodiment of the invention uses a streaming memory fusion scheduling mode to realize resource scheduling accurately and quickly, so as to reduce the data resource scheduling and control pressure brought by multiple nodes, large concurrency and high throughput in a memory fusion architecture, and to realize orderly and efficient sharing of data among the database nodes in the shared storage database cluster architecture.
The IO of the data block resources is distributed to the database background processes of all database nodes, and the read-write concurrency of the global data blocks is controlled through time-sequenced message flows among the database nodes to realize parallel flow control. This scheduling control mode puts little pressure on global data block resource management, has high parallelism, and ensures the efficient concurrent use of data blocks under the shared storage cluster architecture.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood by those skilled in the art in specific cases.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method of the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that: the above examples are only specific embodiments of the present invention for illustrating the technical solution of the present invention, but not for limiting the scope of the present invention, and although the present invention has been described in detail with reference to the foregoing examples, it will be understood by those skilled in the art that the present invention is not limited thereto: any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or perform equivalent substitution of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (10)

1. A method for scheduling database resources, the method comprising:
receiving a request sent by a request background process, wherein the request carries a data block identifier;
acquiring metadata information corresponding to the data block according to the data block identifier;
and executing a corresponding resource scheduling mode according to the metadata information so as to schedule the data block to the request background process.
2. The method of claim 1, wherein the metadata information includes a shared lock holding node list, an exclusive lock holding node number, and a previous modified node number;
and the step of executing a corresponding resource scheduling mode according to the metadata information comprises:
if the metadata information is empty, generating a disk reading instruction and sending the disk reading instruction to the request background process so that the request background process reads the data block from the disk by itself;
if any one of the shared lock holding node list, the exclusive lock holding node number, and the previous modified node number is valid, acquiring a request type corresponding to the request, the request type being one of a shared lock and an exclusive lock;
and determining and executing a corresponding resource scheduling mode according to the metadata information and the request type.
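A possible shape for the selection step of claim 2 is sketched below, continuing the illustration above. The dict layout, the instruction name READ_DISK, and the way the three metadata states are told apart are all assumptions rather than the claimed implementation; the per-state handlers are sketched after claims 4, 5, and 6.

    # Assumed metadata layout for one data block:
    #   {"shared_holders": set of node numbers,    # shared lock holding node list
    #    "exclusive_holder": node number or None,  # exclusive lock holding node number
    #    "last_modifier":   node number or None}   # previous modified node number
    def choose_scheduling_mode(meta, requester_node, lock_mode):
        if meta is None:
            # The metadata information is empty: generate a disk reading instruction so
            # the request background process reads the data block from disk by itself.
            return {"instruction": "READ_DISK", "to": requester_node}
        # At least one field is valid, so the request type (shared or exclusive lock)
        # together with the metadata state decides the scheduling mode.
        shared = meta["shared_holders"]
        owner = meta["exclusive_holder"]
        modifier = meta["last_modifier"]
        if shared and owner is None and modifier is None:
            return {"scenario": "claim 4", "lock_mode": lock_mode, "requester": requester_node}
        if owner is not None and not shared:
            return {"scenario": "claim 5", "lock_mode": lock_mode, "requester": requester_node}
        if shared and modifier is not None and owner is None:
            return {"scenario": "claim 6", "lock_mode": lock_mode, "requester": requester_node}
        raise ValueError("metadata state not covered by this sketch")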
3. The method of claim 2, wherein the step of generating a disk reading instruction and sending the disk reading instruction to the request background process further comprises:
acquiring a request type corresponding to the request;
if the request type is a shared lock, adding the node number of the request background process to the shared lock holding node list;
and if the request type is an exclusive lock, adding the node number of the request background process to the exclusive lock holding node number and the previous modified node number.
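When the requester is sent to disk, claim 3 also registers it in the metadata so that later requests can be forwarded to it instead of hitting the disk again. A sketch using the same assumed dict layout (the function name and field names are illustrative, not claim language):

    def register_after_disk_read(meta, requester_node, lock_mode):
        # Claim 3: after the disk reading instruction has been generated and sent,
        # record the requester according to the request type.
        if lock_mode == "shared":
            meta["shared_holders"].add(requester_node)    # joins the shared lock holding node list
        else:  # exclusive lock
            meta["exclusive_holder"] = requester_node     # becomes the exclusive lock holding node number
            meta["last_modifier"] = requester_node        # and the previous modified node number
        return meta

    # Usage: a brand-new block is requested in exclusive mode by node 3.
    meta = {"shared_holders": set(), "exclusive_holder": None, "last_modifier": None}
    register_after_disk_read(meta, requester_node=3, lock_mode="exclusive")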
4. The method of claim 2, wherein the metadata information of the data block is: the shared lock holding node list is valid, and the exclusive lock holding node number and the previous modified node number are empty;
and the step of determining and executing a corresponding resource scheduling mode according to the metadata information and the request type comprises:
if the request type is a shared lock, sending a shared lock mode data block forwarding instruction to any database node in the shared lock holding node list, so that the database node sends the data block to the request background process, and adding the node number of the request background process to the shared lock holding node list;
if the request type is an exclusive lock, sending an exclusive lock mode data block forwarding instruction to any database node in the shared lock holding node list, so that the database node sends the data block to the request background process;
for each target database node in the shared lock holding node list, erasing a shared lock identifier of the target database node;
and simultaneously, clearing the shared lock holding node list, and adding the node number of the request background process to the exclusive lock holding node number and the previous modified node number.
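One way to read claim 4 in code is shown below: the block is currently held only in shared mode, so a shared request simply adds a reader, while an exclusive request transfers the block and revokes every existing shared lock. The instruction names (FORWARD_BLOCK_SHARED, FORWARD_BLOCK_EXCLUSIVE, CANCEL_SHARED_LOCK) and the dict layout are assumptions, not the claimed implementation.

    def schedule_shared_holders_only(meta, requester_node, lock_mode):
        # Claim 4: shared lock holding node list is valid, the other two fields are empty.
        instructions = []
        source = next(iter(meta["shared_holders"]))   # any database node already holding the block
        if lock_mode == "shared":
            instructions.append({"instruction": "FORWARD_BLOCK_SHARED",
                                 "holder": source, "target": requester_node})
            meta["shared_holders"].add(requester_node)
        else:  # exclusive lock request
            instructions.append({"instruction": "FORWARD_BLOCK_EXCLUSIVE",
                                 "holder": source, "target": requester_node})
            for holder in meta["shared_holders"]:      # every target node drops its shared lock identifier
                instructions.append({"instruction": "CANCEL_SHARED_LOCK", "holder": holder})
            meta["shared_holders"].clear()
            meta["exclusive_holder"] = requester_node
            meta["last_modifier"] = requester_node
        return instructions

    # Usage: nodes 1 and 2 hold the block in shared mode; node 5 requests an exclusive lock.
    meta = {"shared_holders": {1, 2}, "exclusive_holder": None, "last_modifier": None}
    print(schedule_shared_holders_only(meta, requester_node=5, lock_mode="exclusive"))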
5. The method of claim 2, wherein the metadata information of the data block is: the exclusive lock holding node number and the previous modified node number are valid, and the shared lock holding node list is empty;
and the step of determining and executing a corresponding resource scheduling mode according to the metadata information and the request type comprises:
if the request type is a shared lock, sending a shared lock mode data block forwarding instruction to the database node indicated by the exclusive lock holding node number so that the database node sends the data block to the request background process; meanwhile, adding the node number of the request background process to the shared lock holding node list and emptying the exclusive lock holding node number;
and if the request type is an exclusive lock, sending an exclusive lock mode data block forwarding instruction to the database node indicated by the exclusive lock holding node number so that the database node sends the data block to the request background process, and simultaneously adding the node number of the request background process to the exclusive lock holding node number and the previous modified node number.
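Claim 5 covers the case where another node holds the block exclusively. The sketch below follows the reading adopted in the text above (a shared request downgrades the holder, an exclusive request transfers ownership); as before, every identifier is an assumption made for illustration.

    def schedule_exclusive_holder(meta, requester_node, lock_mode):
        # Claim 5: exclusive lock holding node number and previous modified node number
        # are valid, the shared lock holding node list is empty.
        holder = meta["exclusive_holder"]
        if lock_mode == "shared":
            instruction = {"instruction": "FORWARD_BLOCK_SHARED",
                           "holder": holder, "target": requester_node}
            meta["shared_holders"].add(requester_node)   # requester joins the shared lock holding node list
            meta["exclusive_holder"] = None              # exclusive lock holding node number is emptied
        else:  # exclusive lock request: ownership moves to the requester
            instruction = {"instruction": "FORWARD_BLOCK_EXCLUSIVE",
                           "holder": holder, "target": requester_node}
            meta["exclusive_holder"] = requester_node
            meta["last_modifier"] = requester_node
        return instruction

    # Usage: node 4 holds the block exclusively; node 2 requests a shared lock.
    meta = {"shared_holders": set(), "exclusive_holder": 4, "last_modifier": 4}
    print(schedule_exclusive_holder(meta, requester_node=2, lock_mode="shared"))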
6. The method of claim 2, wherein the metadata information of the data block is: the shared lock holding node list and the previous modified node number are valid, and the exclusive lock holding node number is empty;
and the step of determining and executing a corresponding resource scheduling mode according to the metadata information and the request type comprises:
if the request type is a shared lock, sending a shared lock mode data block forwarding instruction to the node indicated by the previous modified node number so that this node sends the data block to the request background process, and adding the node number of the request background process to the shared lock holding node list;
and if the request type is an exclusive lock, sending an exclusive lock mode data block forwarding instruction to the node indicated by the previous modified node number so that this node sends the data block to the request background process, and simultaneously clearing the shared lock holding node list and recording the node number of the request background process as the exclusive lock holding node number and the previous modified node number.
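Claim 6 covers the case where shared holders exist and the node that last modified the block is known; the forwarding instruction is addressed to that last modifier, presumably because it holds the most recent copy. A sketch under the same assumptions as above:

    def schedule_shared_and_last_modifier(meta, requester_node, lock_mode):
        # Claim 6: shared lock holding node list and previous modified node number are
        # valid, the exclusive lock holding node number is empty.
        modifier = meta["last_modifier"]
        if lock_mode == "shared":
            instruction = {"instruction": "FORWARD_BLOCK_SHARED",
                           "holder": modifier, "target": requester_node}
            meta["shared_holders"].add(requester_node)
        else:  # exclusive lock request
            instruction = {"instruction": "FORWARD_BLOCK_EXCLUSIVE",
                           "holder": modifier, "target": requester_node}
            meta["shared_holders"].clear()               # shared lock holding node list is cleared
            meta["exclusive_holder"] = requester_node
            meta["last_modifier"] = requester_node
        return instruction

    # Usage: nodes 1 and 7 hold shared copies and node 7 modified the block last;
    # node 9 requests an exclusive lock.
    meta = {"shared_holders": {1, 7}, "exclusive_holder": None, "last_modifier": 7}
    print(schedule_shared_and_last_modifier(meta, requester_node=9, lock_mode="exclusive"))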
7. The method of claim 1, further comprising, after the step of receiving the request sent by the request background process:
in the case where a plurality of requests are received, performing resource scheduling according to a preset concurrent scheduling rule.
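Claim 7 leaves the concurrent scheduling rule itself open ("preset"). Purely as an assumption, one common rule is to queue simultaneous requests for the same data block and serve them in arrival order, for example:

    from collections import defaultdict, deque

    # Illustrative FIFO rule only; the claim does not fix any particular policy.
    pending = defaultdict(deque)           # data block identifier -> queued requests

    def submit(block_id, request):
        pending[block_id].append(request)  # concurrent arrivals are ordered per block

    def drain(block_id, handle_request):
        # Serve the queued requests for one block strictly one after another.
        queue = pending[block_id]
        while queue:
            handle_request(queue.popleft())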
8. A database resource scheduling apparatus, the apparatus comprising:
the data block request receiving module is used for receiving a data block request sent by a request background process, wherein the request carries a data block identifier;
the acquisition module is used for acquiring metadata information corresponding to the data block according to the data block identifier;
and the scheduling module is used for executing a corresponding resource scheduling mode according to the metadata information so as to schedule the data block to the request background process.
9. A database resource scheduling system, the system comprising a global control module deployed on each database node of a multi-node shared storage database cluster, wherein the global control module is used for receiving a request, performing data block lock management on the database nodes according to the request, and issuing a scheduling control instruction; the scheduling control instruction includes: a disk reading instruction, a shared lock mode data block forwarding instruction, an exclusive lock mode data block forwarding instruction, and a shared lock cancellation instruction.
10. The system of claim 9, wherein the system further comprises a concurrency scheduling module and a request processing module;
the concurrency scheduling module is deployed on each database node of the multi-node shared storage database cluster and is used for receiving the scheduling control instruction of the global control module and executing scheduling control over data block resources and permissions;
and the request processing module is invoked by a database background process or thread and is used for sending a data block request to the global control module and processing the reply result.
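To make the division of labour in claims 9 and 10 concrete, a minimal message-flow sketch follows. The class names, the in-process method calls standing in for the cluster's real inter-node transport, and the stubbed decision logic are all assumptions rather than the claimed system.

    class RequestProcessingModule:
        # Invoked by a database background process or thread on the requesting node.
        def __init__(self, node_no, global_control):
            self.node_no = node_no
            self.global_control = global_control

        def request_block(self, block_id, lock_mode):
            # Send a data block request to the global control module and process the reply.
            return self.global_control.handle(block_id, self.node_no, lock_mode)

    class GlobalControlModule:
        # Deployed on every database node; owns data block lock management and
        # issues scheduling control instructions (read disk, forward block, cancel lock).
        def handle(self, block_id, requester_node, lock_mode):
            # The real decision follows claims 2-6; stubbed here as a disk read.
            return {"instruction": "READ_DISK", "block": block_id, "to": requester_node}

    class ConcurrencySchedulingModule:
        # Deployed on every database node; executes the control instructions it receives.
        def execute(self, instruction):
            pass  # e.g. forward the data block or cancel a shared lock on this node

    # Usage: node 2 asks for block "B42" in shared mode.
    gcm = GlobalControlModule()
    print(RequestProcessingModule(2, gcm).request_block("B42", "shared"))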
CN202311675589.0A 2023-12-07 2023-12-07 Database resource scheduling method, device and system Pending CN117519945A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311675589.0A CN117519945A (en) 2023-12-07 2023-12-07 Database resource scheduling method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311675589.0A CN117519945A (en) 2023-12-07 2023-12-07 Database resource scheduling method, device and system

Publications (1)

Publication Number Publication Date
CN117519945A true CN117519945A (en) 2024-02-06

Family

ID=89758763

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311675589.0A Pending CN117519945A (en) 2023-12-07 2023-12-07 Database resource scheduling method, device and system

Country Status (1)

Country Link
CN (1) CN117519945A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118034613A (en) * 2024-04-11 2024-05-14 深圳市铨兴科技有限公司 Intelligent scheduling method, system and memory for storage space data

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6965893B1 (en) * 2000-12-20 2005-11-15 Oracle International Corporation Techniques for granting shared locks more efficiently
US20050289143A1 (en) * 2004-06-23 2005-12-29 Exanet Ltd. Method for managing lock resources in a distributed storage system
CN1945539A (en) * 2006-10-19 2007-04-11 华为技术有限公司 Method for distributing shared resource lock in computer cluster system and cluster system
US20080071997A1 (en) * 2006-09-15 2008-03-20 Juan Loaiza Techniques for improved read-write concurrency
US20090037367A1 (en) * 2007-07-30 2009-02-05 Sybase, Inc. System and Methodology Providing Workload Management in Database Cluster
CN101800763A (en) * 2009-02-05 2010-08-11 威睿公司 hybrid locking using network and on-disk based schemes
US20130290967A1 (en) * 2012-04-27 2013-10-31 Irina Calciu System and Method for Implementing NUMA-Aware Reader-Writer Locks
CN103458036A (en) * 2013-09-03 2013-12-18 杭州华三通信技术有限公司 Access device and method of cluster file system
US9471400B1 (en) * 2015-07-28 2016-10-18 International Business Machines Corporation Reentrant read-write lock algorithm
CN106897029A (en) * 2017-02-24 2017-06-27 郑州云海信息技术有限公司 A kind of control method and device of LVM data consistencies
CN110659303A (en) * 2019-10-10 2020-01-07 北京优炫软件股份有限公司 Read-write control method and device for database nodes
CN110727709A (en) * 2019-10-10 2020-01-24 北京优炫软件股份有限公司 Cluster database system
CN112148695A (en) * 2019-06-26 2020-12-29 华为技术有限公司 Resource lock management method and device
CN114647663A (en) * 2020-12-18 2022-06-21 北京国双科技有限公司 Resource processing method, device and system, electronic equipment and storage medium
CN115114305A (en) * 2022-04-08 2022-09-27 腾讯科技(深圳)有限公司 Lock management method, device, equipment and storage medium for distributed database
CN116303489A (en) * 2023-01-16 2023-06-23 北京优炫软件股份有限公司 Method and system for realizing layered local type meter lock
CN116303661A (en) * 2023-01-12 2023-06-23 北京万里开源软件有限公司 Processing method, device and system for sequences in distributed database
US20230222102A1 (en) * 2022-01-12 2023-07-13 Dell Products L.P. High-performance remote file system meta data
CN116483628A (en) * 2023-05-11 2023-07-25 瀚高基础软件股份有限公司 Distributed backup method and system based on locking

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6965893B1 (en) * 2000-12-20 2005-11-15 Oracle International Corporation Techniques for granting shared locks more efficiently
US20050289143A1 (en) * 2004-06-23 2005-12-29 Exanet Ltd. Method for managing lock resources in a distributed storage system
US20080071997A1 (en) * 2006-09-15 2008-03-20 Juan Loaiza Techniques for improved read-write concurrency
CN1945539A (en) * 2006-10-19 2007-04-11 华为技术有限公司 Method for distributing shared resource lock in computer cluster system and cluster system
US20090037367A1 (en) * 2007-07-30 2009-02-05 Sybase, Inc. System and Methodology Providing Workload Management in Database Cluster
CN101800763A (en) * 2009-02-05 2010-08-11 威睿公司 hybrid locking using network and on-disk based schemes
US20130290967A1 (en) * 2012-04-27 2013-10-31 Irina Calciu System and Method for Implementing NUMA-Aware Reader-Writer Locks
CN103458036A (en) * 2013-09-03 2013-12-18 杭州华三通信技术有限公司 Access device and method of cluster file system
US9471400B1 (en) * 2015-07-28 2016-10-18 International Business Machines Corporation Reentrant read-write lock algorithm
CN106897029A (en) * 2017-02-24 2017-06-27 郑州云海信息技术有限公司 A kind of control method and device of LVM data consistencies
CN112148695A (en) * 2019-06-26 2020-12-29 华为技术有限公司 Resource lock management method and device
US20220114145A1 (en) * 2019-06-26 2022-04-14 Huawei Technologies Co., Ltd. Resource Lock Management Method And Apparatus
CN110659303A (en) * 2019-10-10 2020-01-07 北京优炫软件股份有限公司 Read-write control method and device for database nodes
CN110727709A (en) * 2019-10-10 2020-01-24 北京优炫软件股份有限公司 Cluster database system
CN114647663A (en) * 2020-12-18 2022-06-21 北京国双科技有限公司 Resource processing method, device and system, electronic equipment and storage medium
US20230222102A1 (en) * 2022-01-12 2023-07-13 Dell Products L.P. High-performance remote file system meta data
CN115114305A (en) * 2022-04-08 2022-09-27 腾讯科技(深圳)有限公司 Lock management method, device, equipment and storage medium for distributed database
CN116303661A (en) * 2023-01-12 2023-06-23 北京万里开源软件有限公司 Processing method, device and system for sequences in distributed database
CN116303489A (en) * 2023-01-16 2023-06-23 北京优炫软件股份有限公司 Method and system for realizing layered local type meter lock
CN116483628A (en) * 2023-05-11 2023-07-25 瀚高基础软件股份有限公司 Distributed backup method and system based on locking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG HUAIXING: "Sharing, Conflict and Adaptive Locking Algorithms for Relational Databases", New Technology of Library and Information Service (现代图书情报技术), no. 06, 25 June 1999 (1999-06-25), pages 25-27 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118034613A (en) * 2024-04-11 2024-05-14 深圳市铨兴科技有限公司 Intelligent scheduling method, system and memory for storage space data
CN118034613B (en) * 2024-04-11 2024-06-11 深圳市铨兴科技有限公司 Intelligent scheduling method, system and memory for storage space data

Similar Documents

Publication Publication Date Title
US5893097A (en) Database management system and method utilizing a shared memory
US5829022A (en) Method and apparatus for managing coherency in object and page caches
JP5006348B2 (en) Multi-cache coordination for response output cache
CN100394406C (en) High speed buffer storage distribution
EP1015983B1 (en) Data sharing method and computer architecture
US6269432B1 (en) Distributed transactional processing system having redundant data
US20060136472A1 (en) Achieving cache consistency while allowing concurrent changes to metadata
JP2005505808A5 (en)
US20060288008A1 (en) Append/read lock compatibility in a distributed file system
WO2006083327A2 (en) A new point-in-time copy operation
CN109582686B (en) Method, device, system and application for ensuring consistency of distributed metadata management
JP2013222373A (en) Storage system, cache control program, and cache control method
CN117519945A (en) Database resource scheduling method, device and system
CN114238518A (en) Data processing method, device, equipment and storage medium
CN112039970A (en) Distributed business lock service method, server, system and storage medium
CN105376269B (en) Virtual machine storage system and its implementation and device
CN111399753B (en) Method and device for writing pictures
US20050262084A1 (en) Storage control device and access control method
EP3467671B1 (en) Cache memory structure and method
CN114820218A (en) Content operation method, device, server and storage medium
US20120150924A1 (en) Apparatus for supporting continuous read/write in asymmetric storage system and method thereof
US7536422B2 (en) Method for process substitution on a database management system
JP2006164218A (en) Storage system and its cache control method
CN114579514B (en) File processing method, device and equipment based on multiple computing nodes
CN109828720A (en) Date storage method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination