
CN112035579A - Graph management method, data storage method, data query method, device and storage medium - Google Patents


Info

Publication number
CN112035579A
CN112035579A
Authority
CN
China
Prior art keywords
graph
service nodes
node
service
instance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910476543.3A
Other languages
Chinese (zh)
Other versions
CN112035579B (en)
Inventor
沈秋军
李道彪
王龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910476543.3A priority Critical patent/CN112035579B/en
Publication of CN112035579A publication Critical patent/CN112035579A/en
Application granted granted Critical
Publication of CN112035579B publication Critical patent/CN112035579B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28: Databases characterised by their database models, e.g. relational or object models
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22: Indexing; Data structures therefor; Storage structures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24: Querying
    • G06F16/245: Query processing
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20: Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27: Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application discloses a graph management method and apparatus based on a graph database, and a computer-readable storage medium, belonging to the field of data processing. The graph data system comprises a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes comprising a master monitoring node and at least one slave monitoring node. The method comprises the following steps: when the master monitoring node receives a graph management request sent by a client, it determines N service nodes from the plurality of service nodes and sends the graph management request only to those N service nodes, thereby instructing the N service nodes to manage a first graph instance according to the graph name of the first graph instance to be managed, which is carried at least in the graph management request. Since N is greater than or equal to 2 and less than the total number of the plurality of service nodes, the master monitoring node only needs to instruct at least two service nodes to manage the first graph instance; even when a large number of graph instances must be managed, no single service node comes under heavy management pressure.

Description

Graph management method, data storage method, data query method, device and storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a graph management method, a data storage method, a data query method, an apparatus, and a storage medium based on a graph database.
Background
A graph database presents the relationships between data to a user intuitively, as a set of entities and the connections between those entities. Graph data systems are widely used because they can store data using such a graph database.
A graph data system comprises a plurality of service nodes, each of which manages the graph instances deployed on it; a service node on which a graph instance is deployed is able to operate on the graph data corresponding to that instance. At present, the graph instances deployed on every service node are the same, that is, each graph instance must be deployed on all service nodes. Consequently, when a large number of graph instances are managed, every service node has to manage every graph instance to be managed, resulting in heavy management pressure on each service node.
Disclosure of Invention
The application provides graph management, data storage, and data query methods, apparatuses, and storage media based on a graph database, which can solve the technical problem of heavy management pressure on each service node when a large number of graph instances are managed.
In a first aspect, a graph management method based on a graph database is provided, a graph data system includes a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes includes a master monitoring node and at least one slave monitoring node, the method includes:
the method comprises the steps that a main monitoring node receives a graph management request sent by a client, wherein the graph management request at least carries a graph name of a first graph instance to be managed, and the first graph instance is used for describing the relationship among a plurality of entities;
the master monitoring node determining N serving nodes from the plurality of serving nodes, the N being greater than or equal to 2 and less than a total number of the plurality of serving nodes;
and the main monitoring node sends the graph management request to the N service nodes so as to indicate the N service nodes to manage the first graph instance according to the graph name of the first graph instance.
Optionally, the graph management request is a graph creation request, where the graph creation request also carries the identifiers of the multiple entities and the relationships among the multiple entities;
the sending, by the master monitoring node, the graph management request to the N service nodes to instruct the N service nodes to manage the first graph instance according to the graph name of the first graph instance, includes:
the master monitoring node sends the graph creation request to the N service nodes to instruct each of the N service nodes to deploy the first graph instance on itself.
Optionally, the graph data system further comprises a distributed data coordination node;
the master monitoring node determines N service nodes from the plurality of service nodes, including:
the main monitoring node acquires the number and the graph name of the graph instance deployed on each service node in the plurality of service nodes from the distributed data coordination node;
if the number of graph instances deployed on each service node is the same, the main monitoring node randomly selects N service nodes from the plurality of service nodes;
and if the numbers of graph instances deployed on the service nodes differ, the main monitoring node selects N service nodes from the plurality of service nodes in ascending order of the number of graph instances deployed on each service node.
Optionally, the graph management request is a graph deletion request;
the master monitoring node determines N service nodes from the plurality of service nodes, including:
the main monitoring node determines the service nodes with the first graph instance deployed from the plurality of service nodes as the N service nodes according to the graph name of the first graph instance;
the sending, by the master monitoring node, the graph management request to the N service nodes to instruct the N service nodes to manage the first graph instance according to the graph name of the first graph instance, includes:
the main monitoring node sends the graph deletion request to the N service nodes to indicate the N service nodes to delete the first graph instance and delete graph data corresponding to the first graph instance, wherein the graph data comprises attribute information of the entities and attribute information of relationships among the entities.
Optionally, the graph data system further comprises a distributed data coordination node;
after the master monitoring node sends the graph management request to the N service nodes, the method further includes:
and the main monitoring node sends an instance distribution updating request to the distributed data coordination node, wherein the instance distribution updating request carries the identifiers of the N service nodes and the graph name of the first graph instance, so as to indicate the distributed data coordination node to update the number and the graph name of the graph instances deployed on the N service nodes.
Optionally, the graph data system further comprises a distributed data coordination node;
the method further comprises the following steps:
the main monitoring node acquires, at every monitoring interval, the number and graph names of the graph instances deployed on each of the plurality of service nodes from the distributed data coordination node;
when the difference between the number of the graph instances deployed on any two service nodes in the plurality of service nodes is greater than or equal to 2, the main monitoring node adjusts the graph instances deployed on the plurality of service nodes according to the number and the graph names of the graph instances deployed on the plurality of service nodes, so that the difference between the number of the graph instances deployed on any two service nodes in the plurality of service nodes after adjustment is smaller than 2.
Optionally, the graph data system further comprises a distributed data coordination node;
the method further comprises the following steps:
the main monitoring node monitors the running states of the plurality of service nodes, and the running states are used for indicating whether the service nodes are abnormal or normal;
when the main monitoring node determines that an abnormal service node exists in the plurality of service nodes, acquiring the number and the graph name of the graph instances deployed on the abnormal service node and the number and the graph name of the graph instances deployed on the rest service nodes except the abnormal service node from the distributed data coordination node;
the main monitoring node redeploys the graph instances deployed on the abnormal service nodes to the rest service nodes according to the number and the graph names of the graph instances deployed on the abnormal service nodes and the number and the graph names of the graph instances deployed on the rest service nodes;
and after the redeployment, the graph instances corresponding to the same graph name are deployed on N service nodes in the rest service nodes, and the number of the graph instances deployed on any two service nodes in the rest service nodes after the redeployment is the same or the difference between the number of the graph instances deployed on any two service nodes is 1.
In a second aspect, a method for storing data based on a graph database is provided, wherein the graph data system comprises a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes comprise a master monitoring node and at least one slave monitoring node, and the method comprises the following steps:
a first service node receives a data storage request sent by a client, wherein the data storage request carries first graph data to be stored and a graph name of a second graph instance;
the first service node is one of N service nodes, where the second graph instance is deployed, determined by the client from the multiple service nodes according to the graph name of the second graph instance, the second graph instance is an instance corresponding to the first graph data, the first graph data includes attribute information of at least one entity in multiple entities with a relationship and/or attribute information of a relationship among the multiple entities, and N is greater than or equal to 2 and less than the total number of the multiple service nodes;
and the first service node stores the first graph data according to the graph name of the second graph instance.
In a third aspect, a method for querying data based on a graph database is provided, where the graph data system includes a plurality of service nodes and a plurality of monitoring nodes, where the plurality of monitoring nodes includes a master monitoring node and at least one slave monitoring node, and the method includes:
a second service node receives a data query request sent by a client, wherein the data query request carries a graph name of a third graph instance;
the second service node is one service node of N service nodes, where the third graph instance is deployed, determined by the client from the multiple service nodes according to the graph name of the third graph instance, the third graph instance is an instance corresponding to second graph data to be queried, the second graph data includes attribute information of at least one entity in multiple entities with a relationship and/or attribute information of a relationship among the multiple entities, and N is greater than or equal to 2 and less than the total number of the multiple service nodes;
and the second service node carries out data query according to the graph name of the third graph instance and sends a query result to the client, wherein the query result comprises the second graph data.
In a fourth aspect, a graph management apparatus based on a graph database is provided, where the graph data system includes a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes includes a master monitoring node and at least one slave monitoring node, and the graph management apparatus is applied to the master monitoring node, the graph management apparatus including:
a receiving module, configured to receive a graph management request sent by a client, where the graph management request at least carries a graph name of a first graph instance to be managed, and the first graph instance is used to describe a relationship between multiple entities;
a determining module to determine N serving nodes from the plurality of serving nodes, the N being greater than or equal to 2 and less than a total number of the plurality of serving nodes;
and the management module is used for sending the graph management request to the N service nodes so as to indicate the N service nodes to manage the first graph instance according to the graph name of the first graph instance.
Optionally, the graph management request is a graph creation request, where the graph creation request also carries the identifiers of the multiple entities and the relationships among the multiple entities;
the management module comprises:
a deployment submodule, configured to send the graph creation request to the N service nodes, so as to instruct each service node in the N service nodes to deploy the first graph instance on itself.
Optionally, the graph data system further comprises a distributed data coordination node;
the determining module comprises:
the obtaining submodule is used for obtaining the number and the graph name of the graph instance deployed on each service node in the plurality of service nodes from the distributed data coordination node;
a first selection submodule, configured to randomly select N service nodes from the plurality of service nodes if the number of the graph instances deployed on each service node is the same;
and a second selection submodule, configured to select N service nodes from the plurality of service nodes in ascending order of the number of graph instances deployed on each service node, if the numbers of graph instances deployed on the service nodes differ.
Optionally, the graph management request is a graph deletion request;
the determining module comprises:
a determining submodule, configured to determine, according to the graph name of the first graph instance, a service node, to which the first graph instance is deployed, from the plurality of service nodes as the N service nodes;
the management module comprises: the method comprises the following steps:
and the deleting submodule is used for sending the graph deleting request to the N service nodes so as to indicate the N service nodes to delete the first graph instance and delete graph data corresponding to the first graph instance, wherein the graph data comprises attribute information of the entities and attribute information of relationships among the entities.
Optionally, the graph data system further comprises a distributed data coordination node;
the graph management apparatus further includes:
an updating module, configured to send an instance distribution update request to the distributed data coordination node, where the instance distribution update request carries the identifiers of the N service nodes and the graph name of the first graph instance, so as to indicate the distributed data coordination node to update the number of the graph instances and the graph name deployed on the N service nodes.
Optionally, the graph data system further comprises a distributed data coordination node;
the graph management apparatus further includes:
the first acquisition module is used for acquiring the number and the graph name of the graph instance deployed on each service node in the plurality of service nodes from the distributed data coordination nodes at intervals of monitoring duration;
and the adjusting module is used for adjusting the graph instances deployed on the plurality of service nodes according to the number of the graph instances deployed on the plurality of service nodes and the graph names when the difference value of the number of the graph instances deployed on any two service nodes in the plurality of service nodes is greater than or equal to 2, so that the difference value of the number of the graph instances deployed on any two service nodes in the plurality of service nodes after adjustment is smaller than 2.
Optionally, the graph data system further comprises a distributed data coordination node;
the graph management apparatus further includes:
the monitoring module is used for monitoring the running states of the plurality of service nodes, and the running states are used for indicating whether the service nodes are abnormal or normal;
a second obtaining module, configured to, when it is determined that an abnormal service node exists in the plurality of service nodes, obtain, from the distributed data coordination node, the number and the graph name of the graph instance deployed on the abnormal service node, and the number and the graph name of the graph instance deployed on the remaining service nodes except the abnormal service node;
the deployment module is used for relocating the graph instances deployed on the abnormal service nodes to the rest service nodes according to the number and the graph names of the graph instances deployed on the abnormal service nodes and the number and the graph names of the graph instances deployed on the rest service nodes;
and after the redeployment, the graph instances corresponding to the same graph name are deployed on N service nodes in the rest service nodes, and the number of the graph instances deployed on any two service nodes in the rest service nodes after the redeployment is the same or the difference between the number of the graph instances deployed on any two service nodes is 1.
In a fifth aspect, there is provided a graph database-based data storage device, where a graph data system includes a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes includes a master monitoring node and at least one slave monitoring node, the data storage device is applied in a first service node, and the data storage device includes:
the receiving module is used for receiving a data storage request sent by a client, wherein the data storage request carries first graph data to be stored and a graph name of a second graph instance;
the first service node is one of N service nodes, where the second graph instance is deployed, determined by the client from the multiple service nodes according to the graph name of the second graph instance, the second graph instance is an instance corresponding to the first graph data, the first graph data includes attribute information of at least one entity in multiple entities with a relationship and/or attribute information of a relationship among the multiple entities, and N is greater than or equal to 2 and less than the total number of the multiple service nodes;
and the storage module is used for storing the first graph data according to the graph name of the second graph instance.
In a sixth aspect, there is provided a data query device based on a graph database, where the graph data system includes a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes includes a master monitoring node and at least one slave monitoring node, the data query device is applied in a second service node, and the data query device includes:
the receiving module is used for receiving a data query request sent by a client, wherein the data query request carries a graph name of a third graph instance;
the second service node is one service node of N service nodes, where the third graph instance is deployed, determined by the client from the multiple service nodes according to the graph name of the third graph instance, the third graph instance is an instance corresponding to second graph data to be queried, the second graph data includes attribute information of at least one entity in multiple entities with a relationship and/or attribute information of a relationship among the multiple entities, and N is greater than or equal to 2 and less than the total number of the multiple service nodes;
and the query module is used for performing data query according to the graph name of the third graph instance and sending a query result to the client, wherein the query result comprises the second graph data.
In a seventh aspect, there is provided a graph management apparatus based on a graph database, the graph management apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of any of the methods of the first aspect described above.
In an eighth aspect, there is provided a graph database-based data storage device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of the second aspect.
In a ninth aspect, there is provided a data query device based on a graph database, the data query device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of the third aspect.
In a tenth aspect, a computer-readable storage medium is provided, having stored thereon instructions which, when executed by a processor, implement the steps of any of the methods of the first aspect described above.
In an eleventh aspect, a computer-readable storage medium is provided, having stored thereon instructions, which when executed by a processor, implement the steps of the method of the second aspect described above.
In a twelfth aspect, a computer-readable storage medium is provided, having instructions stored thereon, which when executed by a processor, implement the steps of the method of the third aspect described above.
In a thirteenth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the method of any of the first aspects above.
In a fourteenth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the steps of the method of the second aspect described above.
In a fifteenth aspect, a computer program product is provided comprising instructions which, when run on a computer, cause the computer to perform the steps of the method of the third aspect described above.
The technical solutions provided by the present application bring at least the following beneficial effects:
in the application, the graph data system includes a plurality of service nodes and a plurality of monitoring nodes, where the plurality of monitoring nodes include a master monitoring node and at least one slave monitoring node. When the master monitoring node receives a graph management request sent by a client, it may determine N service nodes from the plurality of service nodes and send the graph management request only to those N service nodes, so as to instruct them to manage a first graph instance according to the graph name of the first graph instance to be managed, which is carried at least in the graph management request; the first graph instance is used to describe the relationships among a plurality of entities. Since N is greater than or equal to 2 and less than the total number of the plurality of service nodes, the master monitoring node only needs to instruct at least two service nodes, rather than every service node, to manage the first graph instance, so even when a large number of graph instances must be managed, no service node comes under heavy management pressure.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a service node, a monitoring node, a distributed data coordination node, and a storage database deployed on the same server 1021;
FIG. 3 is a flowchart of a graph database-based graph management method according to an embodiment of the present application;
FIG. 4 is a flowchart of a graph database-based graph management method according to an embodiment of the present application;
FIG. 5 is a flowchart of a graph database-based graph management method according to an embodiment of the present application;
FIG. 6 is a flow chart of a method for storing data based on a graph database according to an embodiment of the present application;
FIG. 7 is a flowchart of a data query method based on a graph database according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a graph database-based graph management apparatus according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a graph database based data storage device according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a data query device based on a graph database according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatuses and methods consistent with aspects of the present application.
Before explaining the embodiments of the present application in detail, a system architecture related to the embodiments of the present application is explained.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to fig. 1, the implementation environment includes a client 101 and a graph data system 102, where the graph data system 102 includes a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes including a master monitoring node and at least one slave monitoring node; fig. 1 takes a graph data system 102 with 3 service nodes and 3 monitoring nodes as an example. The client is connected to the main monitoring node through a network, the client is connected to each service node through the network, and the main monitoring node is connected to each service node through the network. The client 101 is installed on a terminal through an installation package and may be, for example, a Gremlin Driver. The terminal may be any device capable of installing the client 101, such as a mobile phone, a PAD (Portable Android Device), or a computer.
The graph data system 102 may further include at least one distributed data coordination node, and in fig. 1, it is illustrated that the graph data system 102 includes 3 distributed data coordination nodes, and data among the distributed data coordination nodes may be synchronized in real time. The distributed data coordination nodes are connected with the client through a network, the distributed data coordination nodes are connected with each service node through a network, and the distributed data coordination nodes are connected with the main monitoring node through a network. The graph data system 102 may further include at least one storage database, and fig. 1 illustrates that the graph data system 102 includes 3 storage databases, and data among the storage databases may be synchronized in real time. The storage database is connected with each service node through a network, and the storage database is connected with the distributed data coordination nodes through a network.
It should be noted that the service node, the monitoring node, and the distributed data coordination node may each be an independent server, and the storage database may likewise be deployed on an independent server; alternatively, at least two of the service node, the monitoring node, the distributed data coordination node, and the storage database may be deployed on the same server. In that case, the service node, the monitoring node, and the distributed data coordination node can be regarded as processes on that server. As shown in fig. 2, fig. 2 is a schematic diagram of a service node, a monitoring node, a distributed data coordination node, and a storage database deployed on the same server 1021.
The functions of the main monitoring node, the service node, the distributed data coordination node and the storage database are briefly described as follows:
the main monitoring node has a graph management scheduling function, that is, the main monitoring node can receive a graph management request sent by the client, and further instruct the N service nodes to manage the first graph instance. The graph management scheduling comprises graph creation scheduling and graph deletion scheduling, wherein the graph creation scheduling refers to that the main monitoring node indicates the service node to deploy the graph instance, and the graph deletion scheduling refers to that the main monitoring node indicates the service node to delete the graph instance and the graph data corresponding to the graph instance. The main monitoring node also has a function of scheduling a timed task, wherein the timed task scheduling refers to that the main monitoring node acquires the number and the graph names of the graph instances deployed in the plurality of service nodes every monitoring time, and detects whether the number and the graph names of the graph instances deployed in the plurality of service nodes meet a deployment condition, the deployment condition refers to that the difference value of the number of the graph instances deployed in each two service nodes in the plurality of service nodes is less than 2, and when the deployment condition is not met, the graph instances deployed in the plurality of service nodes are adjusted. Illustratively, the master monitoring node may perform the process of timed task scheduling by running a timing tool (e.g., quartz). The main monitoring node also has a service node state monitoring function, wherein the service node state monitoring means that the main monitoring node monitors the running states of a plurality of service nodes so as to determine whether the service nodes are abnormal or normal. Illustratively, the Master monitoring node may be represented by a Master.
Graph instances, each with a graph name, are deployed on the service nodes; the graph names may be, for example, graph 1, graph 2, graph 3, and the like. A service node comprises a graph management executor, which executes the actions corresponding to graph management scheduling; illustratively, these may include deploying a graph instance indicated by the master monitoring node, and deleting a graph instance, together with its corresponding graph data, that the master monitoring node indicates should be deleted. A service node further comprises a timed task executor, which executes the actions corresponding to timed task scheduling; illustratively, the graph instances deployed on the service node may be adjusted when their number and graph names do not satisfy the deployment condition. Illustratively, a service node may be a JanusGraph Server.
The distributed data coordination node can elect a master monitoring node from the plurality of monitoring nodes; the monitoring nodes not elected as the master serve as slave monitoring nodes. The distributed data coordination node can also monitor the running state of the master monitoring node and, when it determines through monitoring that the master monitoring node is abnormal, re-elect one of the at least one slave monitoring node as the new master. Illustratively, the distributed data coordination node may be ZooKeeper (zk), and it may implement the election of the master monitoring node through the Paxos algorithm; a slave monitoring node may be denoted Backup.
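The patent does not fix an election mechanism beyond naming Paxos-style election in ZooKeeper. One concrete way to realize it, shown purely as an assumption, is the Apache Curator LeaderLatch recipe; the ensemble addresses and latch path are hypothetical.

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class MonitorNodeElection {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",            // assumed ensemble addresses
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Every monitoring node competes on the same latch path; ZooKeeper
        // grants the latch to exactly one of them, which becomes the master
        // monitoring node. The others block here as slave (Backup) nodes.
        LeaderLatch latch = new LeaderLatch(client, "/graph-system/master-monitor");
        latch.start();
        latch.await();

        System.out.println("elected as master monitoring node");
        // ... run graph management scheduling, timed tasks, and node monitoring ...
    }
}
```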
After the distributed data coordination node elects the master monitoring node, the master monitoring node may obtain the number and the graph name of the graph instance deployed on each service node from the storage database, and store the obtained number and the obtained graph name of the graph instance deployed on each service node into the distributed data coordination node. When the main monitoring node needs to use the number and the graph name of the graph instance deployed on each service node, the main monitoring node can obtain the number and the graph name from the distributed data coordination node.
The distributed data coordination node can also store the connection address of each service node and the connection address of the main monitoring node. The client 101 may obtain a connection address of each service node and a connection address of the main monitoring node from the distributed data coordination node, and further establish a connection between the client 101 and each service node and a connection between the client 101 and the main monitoring node, respectively. The distributed data coordination node can also store a temporary file corresponding to each service node, and the main monitoring node can monitor the running state of each service node by monitoring the temporary file corresponding to each service node.
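The patent calls these "temporary files"; in ZooKeeper terms they map naturally onto ephemeral znodes, although that mapping is our assumption. A hedged sketch using the Apache Curator client (paths and identifiers are hypothetical):

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.api.CuratorWatcher;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.Watcher;
import java.util.List;

public class ServiceNodeMonitor {
    // On startup, a service node registers itself as an ephemeral znode;
    // ZooKeeper deletes it automatically when the node's session dies.
    static void register(CuratorFramework client, String nodeId) throws Exception {
        client.create()
              .creatingParentsIfNeeded()
              .withMode(CreateMode.EPHEMERAL)
              .forPath("/graph-system/servers/" + nodeId);
    }

    // The master watches the children of the parent path; a child vanishing
    // signals an abnormal service node and triggers redeployment.
    static List<String> watchServers(CuratorFramework client) throws Exception {
        return client.getChildren()
                .usingWatcher((CuratorWatcher) event -> {
                    if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
                        // diff against the previous member list, then redeploy the
                        // missing node's graph instances onto the remaining nodes
                    }
                })
                .forPath("/graph-system/servers");
    }
}
```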
The storage database is used for storing the graph data corresponding to the graph instances, and may be, for example, HBase (Hadoop Database). It should be noted that the graph instances deployed on the service nodes and the graph data stored in the storage database may be collectively referred to as a graph database. In addition, if a service node receives a data query request sent by the client 101, the service node may query the graph data from the storage database and feed it back to the client 101.
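Since the client is exemplified as a Gremlin Driver and the service nodes as JanusGraph Servers, a query round-trip could look like the following sketch using the Apache TinkerPop driver; the host, port, and traversal are assumptions, not part of the patent.

```java
import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;
import java.util.List;

public class GraphQueryExample {
    public static void main(String[] args) throws Exception {
        // Connect to one of the N service nodes on which the target graph
        // instance is deployed (its address obtained via the coordination node).
        Cluster cluster = Cluster.build()
                .addContactPoint("service-node-1")   // hypothetical host
                .port(8182)
                .create();
        Client client = cluster.connect();

        // Example traversal over assumed vehicle/intersection graph data.
        List<Result> results = client
                .submit("g.V().has('vehicle','plate','vehicleA').out('passes')")
                .all().get();
        results.forEach(r -> System.out.println(r.getString()));

        cluster.close();
    }
}
```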
Based on the above description, the master monitoring node has the function of graph management scheduling, and the following method from step 301 to step 304 is used to describe the process of executing graph management scheduling by the master monitoring node in detail. FIG. 3 is a flowchart of a graph database-based graph management method according to an embodiment of the present application, and referring to FIG. 3, the method includes the following steps:
step 301: the method comprises the steps that a main monitoring node receives a graph management request sent by a client, the graph management request at least carries a graph name of a first graph instance to be managed, and the first graph instance is used for describing the relationship among a plurality of entities.
When a client needs to manage a graph instance deployed on a service node, for example to create or delete it, the client may send a graph management request to the main monitoring node over the connection established between them. The main monitoring node then receives the graph management request, which carries at least the graph name of the first graph instance to be managed; the first graph instance is used to describe a relationship between multiple entities.
The graph management request may be a graph creation request or a graph deletion request. If the graph management request is a graph creation request, the graph creation request may carry the graph name of the first graph instance, and may also carry the identifiers of the multiple entities and the relationships between them. Illustratively, the graph name of the first graph instance is graph A, the two entities are a vehicle and an intersection, their identifiers are vehicle A and intersection B respectively, and the relationship between them is that the vehicle passes through the intersection. If the graph management request is a graph deletion request, it may carry only the graph name of the first graph instance.
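The patent only lists the fields such a graph creation request carries. A hypothetical in-memory shape for the request, reusing the vehicle/intersection example above (all field names are illustrative; the patent defines no schema):

```java
import java.util.List;

public class GraphCreateRequestExample {
    // Field names are assumptions; the patent does not define a wire format.
    record Relation(String fromEntity, String toEntity, String label) {}
    record GraphCreateRequest(String graphName,
                              List<String> entityIds,
                              List<Relation> relations) {}

    public static void main(String[] args) {
        GraphCreateRequest req = new GraphCreateRequest(
                "graphA",
                List.of("vehicleA", "intersectionB"),
                List.of(new Relation("vehicleA", "intersectionB", "passesThrough")));
        System.out.println(req);
    }
}
```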
In the related art, when performing graph management, a user must manage a configuration file for each graph instance: when creating a graph instance on a service node, the user must add the corresponding configuration file, and when deleting a graph instance from a service node, the user must delete it. Because the configuration files are managed manually, the service node hosting the graph instance being managed must be restarted after every manual operation, and the change takes effect only after the restart. In the embodiment of the application, a plurality of monitoring nodes are added to the graph data system, and the master monitoring node among them receives the graph management request sent by the client; this realizes automatic management of graph instances by the master monitoring node, requires no restart of the service node hosting the graph instance being managed, and thus avoids the restarts forced by manual management.
Step 302: the main monitoring node determines N service nodes from a plurality of service nodes included in the graph data system, wherein N is greater than or equal to 2 and less than the total number of the plurality of service nodes.
If the graph management request in step 301 is a graph creation request, the main monitoring node may obtain, from the distributed data coordination node, the number and graph names of the graph instances deployed on each of the plurality of service nodes. If the number of graph instances deployed on each service node is the same, the main monitoring node may randomly select N service nodes from the plurality of service nodes; if the numbers differ, the main monitoring node may select N service nodes from the plurality of service nodes in ascending order of the number of graph instances deployed on each service node.
It should be noted that, if the numbers of graph instances deployed on the service nodes differ, the main monitoring node may order the plurality of service nodes ascending by the number of graph instances deployed on each, and take the first N service nodes in that ordering as the selected N service nodes.
It should be further noted that differing numbers of deployed graph instances covers two cases: in the first, the numbers of graph instances deployed on the service nodes are all distinct; in the second, only some of the numbers coincide. For the first case, the main monitoring node may simply order the plurality of service nodes in ascending order of the number of deployed graph instances. For the second case, the main monitoring node may first group together the service nodes on which the same number of graph instances is deployed, obtaining multiple groups of service nodes, then order the groups in ascending order of their deployed-instance counts, with the order of the service nodes within each group arranged randomly. A sketch of this selection rule follows.
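A minimal sketch of the creation-case selection in step 302, under the assumption that the per-node instance counts arrive as a map: shuffle first so that ties, including the all-equal case, break randomly, then stable-sort ascending by count and take the first N.

```java
import java.util.*;

public class ServiceNodeSelection {
    // instanceCounts: service node id -> number of deployed graph instances.
    static List<String> pickServiceNodes(Map<String, Integer> instanceCounts, int n) {
        List<String> nodes = new ArrayList<>(instanceCounts.keySet());
        Collections.shuffle(nodes);                   // random order among equal counts
        // Stable sort: ascending by instance count, shuffle preserved as tie-break.
        nodes.sort(Comparator.comparingInt(instanceCounts::get));
        return new ArrayList<>(nodes.subList(0, n));
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = Map.of("node1", 3, "node2", 1, "node3", 1, "node4", 2);
        System.out.println(pickServiceNodes(counts, 2)); // e.g. [node2, node3] or [node3, node2]
    }
}
```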
If the graph management request in step 301 is a graph deletion request, the main monitoring node may determine, from the plurality of service nodes, the service node to which the first graph instance is deployed as the N service nodes according to the graph name of the first graph instance.
It should be noted that the master monitoring node may obtain, from the distributed data coordination node, the graph names of the graph instances deployed on each of the multiple service nodes, determine from the graph name of the first graph instance which service nodes host it, and take those service nodes as the N service nodes, as sketched below.
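Under the same assumed map representation used above, the deletion-case lookup reduces to a filter over the coordination node's bookkeeping:

```java
import java.util.*;

public class GraphDeletionLookup {
    // graphsByNode: service node id -> names of graph instances deployed there.
    static List<String> nodesHosting(Map<String, Set<String>> graphsByNode, String graphName) {
        return graphsByNode.entrySet().stream()
                .filter(e -> e.getValue().contains(graphName))
                .map(Map.Entry::getKey)
                .toList();
    }
}
```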
Step 303: and the main monitoring node sends the graph management request to the N service nodes so as to indicate the N service nodes to manage the first graph instance according to the graph name of the first graph instance.
If the graph management request in step 301 is a graph creation request, the master monitoring node may send a graph creation request to the N service nodes to instruct each of the N service nodes to deploy the first graph instance on itself.
Since N is greater than or equal to 2, that is, the first graph instance is deployed on at least two service nodes, the graph data system in the embodiment of the present application is highly available. In addition, since N is smaller than the total number of service nodes in the graph data system, that is, the first graph instance is not deployed on all service nodes, the embodiment avoids heavy deployment pressure on each service node.
It should be noted that, after each of the N service nodes has deployed the first graph instance on itself, it may send a graph-creation success response to the main monitoring node, which may then forward the response to the client so that the client knows the first graph instance has been created successfully.
If the graph management request in step 301 is a graph deletion request, the master monitoring node may send a graph deletion request to the N service nodes to instruct the N service nodes to delete the first graph instance and delete graph data corresponding to the first graph instance, where the graph data includes attribute information of a plurality of entities and attribute information of relationships between the plurality of entities.
It should be noted that after the N service nodes have deleted the first graph instance and its corresponding graph data, they may send a graph-deletion success response to the main monitoring node, which may then forward the response to the client so that the client knows the first graph instance and its graph data have been deleted successfully.
It should be further noted that the attribute information of the multiple entities is information for describing characteristics of the entities, for example, the attribute information of the vehicle includes a vehicle color, a license plate number, a vehicle type, and the like, and the attribute information of the intersection includes a geographic coordinate, an intersection type, and the like, and the intersection type may include, for example, a main road intersection, a sidewalk intersection, a one-way road intersection, and the like. The attribute information of the relationship between the plurality of entities is information for describing a feature of the relationship, and for example, the attribute information of the vehicle passing through the intersection includes a passing time and the like.
Step 304: and the main monitoring node sends an instance distribution updating request to the distributed data coordination node, wherein the instance distribution updating request carries the identifiers of the N service nodes and the graph name of the first graph instance so as to indicate the distributed data coordination node to update the number and the graph name of the graph instances deployed on the N service nodes.
The number and graph names of the graph instances deployed on each service node are stored in the distributed data coordination node, and whenever the main monitoring node needs them, it obtains them from the distributed data coordination node. To keep this stored information accurate, and thereby keep the main monitoring node's graph management accurate, each time the main monitoring node sends a graph management request to the N service nodes to instruct them to manage the first graph instance according to its graph name, it can also send an instance distribution update request to the distributed data coordination node, instructing it to update the number and graph names of the graph instances deployed on the N service nodes. The instance distribution update request carries the identifiers of the N service nodes and the graph name of the first graph instance.
For example, if the graph management request is a graph creation request, that is, the main monitoring node has deployed the first graph instance on each of the N service nodes, the distributed data coordination node may be instructed to update as follows: add the graph name of the first graph instance to each of the N service nodes, and increase the number of graph instances deployed on each of the N service nodes by 1.
If the graph management request is a graph deletion request, that is, the master monitoring node has deleted the first graph instance on each of the N service nodes, the update is the reverse: delete the graph name of the first graph instance from each of the N service nodes, and decrease the number of graph instances deployed on each of the N service nodes by 1.
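A sketch of the bookkeeping this update implies, holding the distribution as a map from node id to deployed graph names (an assumed representation, with the per-node count derived as the set size):

```java
import java.util.*;

public class InstanceDistributionUpdate {
    // graphsByNode mirrors the bookkeeping held by the coordination node:
    // service node id -> names of graph instances deployed there.
    static void applyUpdate(Map<String, Set<String>> graphsByNode,
                            List<String> chosenNodes,
                            String graphName,
                            boolean created) {
        for (String node : chosenNodes) {
            Set<String> graphs = graphsByNode.computeIfAbsent(node, k -> new HashSet<>());
            if (created) graphs.add(graphName);   // create: record name, count +1
            else graphs.remove(graphName);        // delete: drop name, count -1
        }
    }
}
```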
In the embodiment of the application, the graph data system includes a plurality of service nodes and a plurality of monitoring nodes, where the plurality of monitoring nodes include a master monitoring node and at least one slave monitoring node. When the master monitoring node receives a graph management request sent by a client, it may determine N service nodes from the plurality of service nodes and send the graph management request only to those N service nodes, so as to instruct them to manage a first graph instance according to the graph name carried at least in the graph management request, where the first graph instance is used to describe a relationship between a plurality of entities. Since N is greater than or equal to 2 and less than the total number of the plurality of service nodes, the main monitoring node only needs to instruct at least two service nodes, rather than every service node, to manage the first graph instance, so even when a large number of graph instances must be managed, no service node comes under heavy management pressure.
The master monitoring node also has a function of performing timing task scheduling, and the following method from step 401 to step 404 is used to describe the process of performing timing task scheduling by the master monitoring node in detail. Fig. 4 is a flowchart of a graph database-based graph management method according to an embodiment of the present application, and referring to fig. 4, the method includes:
step 401: and the main monitoring node acquires the number and the graph name of the graph instance deployed on each service node in the plurality of service nodes from the distributed data coordination node every monitoring time.
Because the main monitoring node may deploy graph instances on the service nodes, or delete graph instances deployed there, the number and graph names of the graph instances deployed on the service nodes can change. The main monitoring node therefore performs timed task scheduling, that is, the process of acquiring, from the distributed data coordination node at every monitoring interval, the number and graph names of the graph instances deployed on each of the plurality of service nodes.
The monitoring interval may be 1 hour, 1 day, 1 week, 1 month, and the like; the embodiment of the present application does not limit it.
Step 402: and the main monitoring node monitors whether the difference value of the number of the graph instances deployed on any two service nodes in the plurality of service nodes is greater than or equal to 2.
If the number of graph instances deployed on the service nodes changes, it may no longer satisfy the deployment condition, namely that the difference between the numbers of graph instances deployed on every two of the plurality of service nodes is less than 2. That is, there may be two service nodes whose deployed-instance counts differ by 2 or more. The main monitoring node therefore monitors whether the numbers of graph instances deployed on the plurality of service nodes satisfy the deployment condition, i.e., whether the difference between the numbers deployed on any two service nodes is greater than or equal to 2.
Step 403: when the difference value between the number of the graph instances deployed on any two service nodes in the plurality of service nodes is greater than or equal to 2, the main monitoring node adjusts the graph instances deployed on the plurality of service nodes according to the number and the graph names of the graph instances deployed on the plurality of service nodes, so that the difference value between the number of the graph instances deployed on any two service nodes in the plurality of service nodes after adjustment is smaller than 2.
When the main monitoring node adjusts the graph instances deployed on the plurality of service nodes, it may determine one or more first graph names and one or more second graph names according to the number and graph names of the graph instances deployed on a first service node and on a second service node. The main monitoring node sends a first graph deletion request to the first service node; this request carries the one or more first graph names and instructs the first service node to delete the graph instances corresponding to them. The main monitoring node sends a graph addition request to the second service node; this request carries the one or more second graph names and instructs the second service node to add the graph instances corresponding to them. Of any two service nodes, the first service node is the one with more deployed graph instances, and the second service node is the one with fewer.
A first graph name may be the same as a second graph name. There may be a single first graph name and a single second graph name: for example, if graphs 1, 2 and 3 are deployed on the first service node and graph 2 is deployed on the second service node, the first graph name may be graph 1 or graph 3, and the second graph name may likewise be graph 1 or graph 3. There may also be multiple first graph names and multiple second graph names: for example, if graphs 1, 2, 3, 4 and 5 are deployed on the first service node and graph 2 is deployed on the second service node, the first graph names may be any two of graphs 1, 3, 4 and 5, and the second graph names may likewise be any two of graphs 1, 3, 4 and 5. For convenience of description, graph instance P is referred to as graph P in these examples, where P is an integer greater than or equal to 1; the same shorthand is used in the examples below.
In addition, the numbers of graph instances deployed on any two service nodes may be adjusted in other ways, as long as the difference between the numbers of graph instances deployed on any two service nodes after the adjustment is less than 2; this is not limited in the embodiment of the present application.
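Purely as an illustration of steps 402 and 403, the following minimal sketch shows one way the adjustment could be implemented; the function rebalance, the deployments mapping, and the two request helpers are hypothetical names introduced here, not part of the embodiment. It repeatedly migrates one graph instance from the most-loaded service node to the least-loaded one until the deployment condition holds:

    # deployments maps a service node to the set of graph names deployed on it,
    # e.g. {"node1": {"graph1", "graph2", "graph3"}, "node2": {"graph2"}}
    def rebalance(deployments, send_graph_deletion_request, send_graph_addition_request):
        while True:
            first = max(deployments, key=lambda n: len(deployments[n]))   # more instances
            second = min(deployments, key=lambda n: len(deployments[n]))  # fewer instances
            if len(deployments[first]) - len(deployments[second]) < 2:
                break  # deployment condition satisfied; nothing to adjust
            # choose a graph on the first node that the second node does not host,
            # so the migration does not duplicate a deployment
            candidates = deployments[first] - deployments[second]
            if not candidates:
                break
            name = candidates.pop()
            send_graph_deletion_request(first, name)   # carries the first graph name
            send_graph_addition_request(second, name)  # carries the second graph name
            deployments[first].discard(name)
            deployments[second].add(name)

Each migration narrows the gap between the most- and least-loaded nodes by 2, so the loop terminates once the deployment condition is met.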
Step 404: when the difference between the numbers of graph instances deployed on every two of the plurality of service nodes is less than 2, the main monitoring node does not adjust the graph instances deployed on the service nodes.
When the difference between the numbers of graph instances deployed on every two of the plurality of service nodes is less than 2, the numbers of graph instances deployed on the plurality of service nodes satisfy the deployment condition, and therefore the main monitoring node need not adjust them.
In the embodiment of the present application, the main monitoring node may perform timed task scheduling, that is, obtain from the distributed data coordination node, at intervals of the monitoring duration, the number and graph names of the graph instances deployed on each of the plurality of service nodes. When it detects service nodes that do not satisfy the deployment condition, it may adjust the number of graph instances deployed on them so that the deployment condition is satisfied after the adjustment. Because the deployment condition requires that the difference between the numbers of graph instances deployed on every two service nodes is less than 2, the graph instances are distributed over the service nodes in a relatively balanced way, which avoids the load imbalance and high deployment pressure that would result from deploying a large number of graph instances on a single service node.
The main monitoring node also has a service node status monitoring function. The following steps 501 to 504 describe in detail the process by which the main monitoring node monitors the status of the service nodes. Fig. 5 is a flowchart of a graph database-based graph management method according to an embodiment of the present application. Referring to fig. 5, the method includes:
step 501: the main monitoring node monitors the running states of the plurality of service nodes, and the running states are used for indicating whether the service nodes are abnormal or normal.
Since any of the plurality of service nodes may become abnormal, the main monitoring node may perform a service node status monitoring process to determine whether each service node is abnormal or normal.
It should be noted that the communication connection between the distributed data coordination node and each service node may be a persistent (long) connection. While the persistent connection between the distributed data coordination node and a service node is normal, a temporary file corresponding to that service node is kept on the distributed data coordination node; when the persistent connection becomes abnormal, the temporary file related to that service node is deleted from the distributed data coordination node. In other words, when a service node is online, a temporary file for it exists on the distributed data coordination node; when the service node is abnormal, that is, offline, no temporary file for it is stored there. The main monitoring node can therefore monitor whether the temporary file corresponding to each service node exists: if it exists, the service node is normal; if not, the service node is abnormal.
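This temporary-file mechanism matches the ephemeral-node behavior of ZooKeeper-style coordination services. Assuming, for illustration only, that the distributed data coordination node is such a service accessed through the kazoo Python client (the embodiment does not name a concrete product, so the service, the address and the paths below are all assumptions), liveness detection could look like:

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="coordinator:2181")  # hypothetical address
    zk.start()

    # On a service node: register an ephemeral node (the "temporary file") that the
    # coordination service deletes automatically when the persistent connection drops.
    zk.create("/service_nodes/node1", b"", ephemeral=True, makepath=True)

    # On the main monitoring node: a service node is normal if its temporary file
    # still exists, and abnormal (offline) if it has disappeared.
    for node in ("node1", "node2", "node3"):
        state = "normal" if zk.exists("/service_nodes/" + node) else "abnormal"
        print(node, state)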
The main monitoring node may monitor the running states of the plurality of service nodes at intervals of a state monitoring duration, or may monitor them in real time.
Step 502: when the main monitoring node determines that an abnormal service node exists among the plurality of service nodes, it obtains from the distributed data coordination node the number and graph names of the graph instances deployed on the abnormal service node, as well as the number and graph names of the graph instances deployed on the remaining service nodes other than the abnormal one.
Illustratively, the remaining service nodes other than the abnormal service node are normal.
Step 503: the main monitoring node redeploys the graph instances deployed on the abnormal service node to the remaining service nodes according to the number and graph names of the graph instances deployed on the abnormal service node and the number and graph names of the graph instances deployed on the remaining service nodes.
After the redeployment, the graph instances corresponding to the same graph name are deployed on N of the remaining service nodes, and the numbers of graph instances deployed on any two of the remaining service nodes are the same or differ by 1.
The main monitoring node may redeploy the graph instance deployed on the abnormal service node to the remaining service nodes according to the following two steps:
Step 5031: the main monitoring node determines, from the remaining service nodes, M service nodes and one or more third graph names corresponding to each of the M service nodes, according to the number and graph names of the graph instances deployed on the abnormal service node and the number and graph names of the graph instances deployed on the remaining service nodes.
A third graph name is the graph name of a graph instance deployed on the abnormal service node, and M is greater than or equal to 1 and less than or equal to the total number of the remaining service nodes.
It should be noted that, when determining the M service nodes, the main monitoring node may determine the service node corresponding to each third graph name and take the service nodes corresponding to all the third graph names as the M service nodes. The graph names of the graph instances already deployed on the service node corresponding to a third graph name must all differ from that third graph name. That is, for each third graph name, the main monitoring node may select from the remaining service nodes any service node on which no deployed graph instance bears that graph name, and take the selected service node as the service node corresponding to that third graph name.
Illustratively, graphs 1 and 4 are deployed on service node 1, graphs 1 and 2 on service node 2, graphs 2 and 3 on service node 3, and graphs 3 and 4 on service node 4. If service node 2 becomes abnormal, the third graph names are graph 1 and graph 2, which were deployed on service node 2; the service node corresponding to graph 1 is then service node 3 or service node 4, and the service node corresponding to graph 2 is service node 1 or service node 4.
It should be further noted that the third graph names and the service nodes may be in a one-to-one correspondence, that is, each of the M service nodes corresponds to exactly one third graph name. They may also be in a many-to-one relationship, that is, a service node among the M service nodes corresponds to multiple third graph names.
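For illustration only, a minimal sketch of step 5031 follows; the function choose_targets and the deployments mapping are hypothetical names, not part of the embodiment. For each third graph name it picks the least-loaded remaining node that does not already host a graph instance with that name, which also keeps the per-node instance counts equal or differing by 1:

    # deployments maps each remaining (normal) service node to the set of graph
    # names deployed on it; failed_graphs holds the third graph names, i.e. the
    # graph names of the instances deployed on the abnormal service node.
    def choose_targets(deployments, failed_graphs):
        plan = {}  # third graph name -> target service node
        for name in failed_graphs:
            # candidate nodes: those on which no deployed instance bears this name
            candidates = [n for n, graphs in deployments.items() if name not in graphs]
            if not candidates:
                continue  # every remaining node already hosts this graph
            target = min(candidates, key=lambda n: len(deployments[n]))
            plan[name] = target
            deployments[target].add(name)
        return plan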
Step 5032: the main monitoring node redeploys the graph instances deployed on the abnormal service node to the M service nodes according to the one or more third graph names corresponding to each of the M service nodes.
When a service node becomes abnormal, the graph instances deployed on it can always be redeployed to other normal service nodes. Therefore, as long as at least two normal service nodes remain, every graph instance is guaranteed to be deployed on at least two normal service nodes, which ensures the high availability of the graph data system.
It should be noted that, because the main monitoring node performs the service node status monitoring process, when only one normal service node remains, all graph instances are deployed on that node. That is, the graph data system in the embodiment of the present application tolerates at most X-1 abnormal service nodes, where X is the total number of service nodes included in the graph data system.
It should be further noted that, because the numbers of graph instances deployed on any two of the remaining service nodes after the redeployment are the same or differ by 1, the graph instances are distributed over the service nodes in a relatively balanced way, which avoids the load imbalance and high deployment pressure that would result from deploying a large number of graph instances on a single service node.
Step 504: when the main monitoring node determines that no abnormal service node exists among the plurality of service nodes, it does not redeploy the graph instances on the service nodes.
In the embodiment of the present application, the main monitoring node may perform the service node status monitoring process, that is, monitor the running states of the plurality of service nodes. When monitoring determines that an abnormal service node exists, the main monitoring node may redeploy the graph instances deployed on the abnormal service node to the remaining service nodes. Ensuring that after the redeployment the graph instances corresponding to the same graph name are deployed on N of the remaining service nodes means that managing a large number of graph instances does not place heavy management pressure on any single service node. In addition, ensuring that after the redeployment the numbers of graph instances on every two service nodes are the same or differ by 1 means that, as long as at least two normal service nodes remain, every graph instance is stored on at least two service nodes, which ensures the high availability of the graph data system and also avoids the load imbalance and high deployment pressure caused by deploying a large number of graph instances on a single service node.
It should be noted that the embodiment of the present application does not limit the order of steps 301 to 304, steps 401 to 404, and steps 501 to 504; the order may be adjusted as appropriate, and steps may be added or removed as circumstances require. For example, steps 301 to 304 and steps 401 to 404 may be executed in parallel. Any variation readily conceivable by a person skilled in the art within the technical scope disclosed in the present application falls within its protection scope, and is not described further here.
The following steps 601 to 603 describe in detail the process by which the first service node implements the graph database-based data storage method. Fig. 6 is a flowchart of a data storage method based on a graph database according to an embodiment of the present application. Referring to fig. 6, the method includes the following steps:
Step 601: the client determines, from the plurality of service nodes and according to the graph name of the second graph instance corresponding to the first graph data to be stored, the N service nodes on which the second graph instance is deployed, where N is greater than or equal to 2 and less than the total number of the plurality of service nodes.
Since each graph instance is deployed on N service nodes in the plurality of service nodes, when the client needs to store the first graph data, the N service nodes where the second graph instance is deployed can be determined according to the graph name of the second graph instance corresponding to the first graph data.
It should be noted that the process by which the client determines the N service nodes on which the second graph instance is deployed, according to the graph name of the second graph instance corresponding to the first graph data, may be as follows: the client obtains, from the distributed data coordination node, the graph names of the graph instances deployed on each of the plurality of service nodes, and takes as the N service nodes those service nodes on which a deployed graph instance has the same graph name as the second graph instance.
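A minimal sketch of this determination, with hypothetical names (nodes_for_graph and the deployments mapping are illustrative only, not part of the embodiment):

    # deployments maps each service node to the set of graph names deployed on it,
    # as obtained from the distributed data coordination node.
    def nodes_for_graph(deployments, graph_name):
        # the N service nodes on which an instance with this graph name is deployed
        return [node for node, graphs in deployments.items() if graph_name in graphs]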
It should be further noted that the first graph data includes attribute information of at least one entity of the plurality of entities having a relationship and/or attribute information of a relationship between the plurality of entities.
Step 602: the client sends a data storage request to a first service node among the N service nodes, where the data storage request carries the first graph data and the graph name of the second graph instance.
It should be noted that the first service node is any one of the N service nodes. Since the second graph instance is deployed on each of the N service nodes, the client may select any service node from the N service nodes as the first service node and send the data storage request to it.
It should be further noted that the client may select a service node from the N service nodes through a round-robin scheduling algorithm, or through another algorithm, which is not limited in the embodiment of the present application.
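For illustration, a minimal round-robin picker over the N service nodes (the class name and its use here are assumptions, not part of the embodiment):

    import itertools

    # Cycles through the N service nodes hosting a graph instance.
    class RoundRobin:
        def __init__(self, nodes):
            self._cycle = itertools.cycle(nodes)

        def pick(self):
            return next(self._cycle)

    picker = RoundRobin(["node1", "node2"])  # the N nodes hosting the second graph instance
    first_service_node = picker.pick()       # target of this data storage request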
In addition, the client may send a data storage request to the first service node through a connection with the first service node.
Step 603: the first service node receives the data storage request sent by the client and stores the first graph data according to the graph name of the second graph instance.
After receiving the data storage request sent by the client, the first service node may first determine all the graph data indicated by the graph name of the second graph instance, then determine from that graph data the graph data associated with the first graph data, and store the first graph data in association with it. The associative storage may be implemented by allocating, in the graph data associated with the first graph data, a field for identifying the first graph data and storing the first graph data in that field. Of course, the first graph data may also be stored in other ways, which is not limited in the embodiment of the present application.
It should be noted that, if the first graph data is attribute information of an entity, the graph data associated with the first graph data is the identifier of that entity; if the first graph data is attribute information of a relationship between a plurality of entities, the graph data associated with the first graph data is the relationship between those entities.
For example, if the first graph data is attribute information of vehicle A, the first graph data is stored in association with vehicle A; for instance, a property field is assigned to vehicle A and the attribute information of vehicle A is stored in that field.
It should also be noted that the first service node may store the first graph data in a storage database included in the graph data system.
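For illustration only, a sketch of this associative storage (the store structure and the function name are assumptions): graph data is kept per graph name, and attribute information is written into a property field attached to the graph data it describes:

    # store maps a graph name to all graph data indicated by that name; here each
    # entity identifier maps to its associated fields.
    store = {"graph1": {"vehicleA": {}}}

    def store_attributes(store, graph_name, entity_id, attributes):
        graph = store[graph_name]                 # all graph data of the second graph instance
        entity = graph.setdefault(entity_id, {})  # graph data associated with the new data
        entity.setdefault("property", {}).update(attributes)  # the allocated field

    store_attributes(store, "graph1", "vehicleA", {"color": "red"})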
In addition to data storage, data deletion can also be performed based on the graph database. The process may be: the client determines, from the plurality of service nodes and according to the graph name of the fourth graph instance corresponding to the third graph data to be deleted, the N service nodes on which the fourth graph instance is deployed, where N is greater than or equal to 2 and less than the total number of the plurality of service nodes. The client sends a data deletion request to a third service node among the N service nodes, where the data deletion request carries the third graph data and the graph name of the fourth graph instance. The third service node receives the data deletion request sent by the client and deletes the third graph data according to the graph name of the fourth graph instance. The third graph data includes an identifier of at least one of the plurality of entities and/or a relationship between the plurality of entities, or the third graph data includes attribute information of at least one of the plurality of entities and/or attribute information of a relationship between the plurality of entities.
It should be noted that, after receiving the data deletion request sent by the client, the third service node may determine all the graph data indicated by the graph name of the fourth graph instance and then delete the third graph data from it.
If the third graph data includes an identifier of at least one of the plurality of entities and/or a relationship between the plurality of entities, the third service node may determine, from all the graph data indicated by the graph name of the fourth graph instance, the third graph data and the graph data associated with it, and delete both. For example, if the third graph data is the identifier of vehicle A, then vehicle A and the graph data associated with vehicle A, that is, the attribute information of vehicle A, are deleted. If the third graph data includes attribute information of at least one of the plurality of entities and/or attribute information of relationships between the plurality of entities, the third service node may determine the third graph data from all the graph data indicated by the graph name of the fourth graph instance and delete only the third graph data.
It should be further noted that, in the data deletion process, determining the N service nodes on which the fourth graph instance is deployed is similar to determining the N service nodes on which the second graph instance is deployed in step 601, and is not repeated here. When the client sends the data deletion request to a third service node among the N service nodes, the third service node may likewise be selected through a round-robin scheduling algorithm or another algorithm.
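A minimal sketch of the deletion rule described above, using the same hypothetical store structure as in the storage example (names are illustrative): deleting an entity identifier removes the entity together with its associated attribute information, while deleting attribute information removes only the property field:

    def delete_graph_data(store, graph_name, entity_id, attributes_only):
        graph = store[graph_name]  # all graph data indicated by the fourth graph name
        if attributes_only:
            graph.get(entity_id, {}).pop("property", None)  # delete only the attributes
        else:
            graph.pop(entity_id, None)  # delete the entity and its associated graph data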
In the embodiment of the present application, in a scenario where graph management is performed by the method of steps 301 to 304, the first service node may receive a data storage request sent by the client that carries the first graph data to be stored and the graph name of the second graph instance, and store the first graph data according to the graph name of the second graph instance, thereby achieving the beneficial effect of accurately storing graph data in a scenario where graph management is performed by the method of steps 301 to 304.
The following steps 701 to 703 describe in detail the process by which the second service node implements the graph database-based data query method. Fig. 7 is a flowchart of a data query method based on a graph database according to an embodiment of the present application. Referring to fig. 7, the method includes the following steps:
Step 701: the client determines, from the plurality of service nodes and according to the graph name of the third graph instance corresponding to the second graph data to be queried, the N service nodes on which the third graph instance is deployed, where N is greater than or equal to 2 and less than the total number of the plurality of service nodes.
Since each graph instance is deployed on N service nodes in the plurality of service nodes, when the client needs to query the second graph data, the N service nodes where the third graph instance is deployed can be determined according to the graph name of the third graph instance corresponding to the second graph data.
It should be noted that the process by which the client determines the N service nodes on which the third graph instance is deployed, according to the graph name of the third graph instance corresponding to the second graph data, may be as follows: the client obtains, from the distributed data coordination node, the graph names of the graph instances deployed on each of the plurality of service nodes, and takes as the N service nodes those service nodes on which a deployed graph instance has the same graph name as the third graph instance.
It should be further noted that the second graph data includes attribute information of at least one entity of the plurality of entities having the relationship and/or attribute information of the relationship between the plurality of entities.
Step 702: the client sends a data query request to a second service node among the N service nodes, where the data query request carries the graph name of the third graph instance.
It should be noted that the second service node is any one of the N service nodes. Since the third graph instance is deployed on each of the N service nodes, the client may select any service node from the N service nodes as the second service node and send the data query request to it.
It should be further noted that the client may select a service node from the N service nodes through a round-robin scheduling algorithm, or through another algorithm, which is not limited in the embodiment of the present application.
In addition, the client may send the data query request to the second service node through a connection with the second service node.
Step 703: the second service node receives the data query request sent by the client, performs the data query according to the graph name of the third graph instance, and sends the query result to the client, where the query result includes the second graph data.
After receiving the data query request sent by the client, the second service node may query the second graph data in the storage database included in the graph data system according to the graph name of the third graph instance, and feed the second graph data back to the client.
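For illustration, a sketch of this lookup against the same hypothetical store structure used in the storage example (names are assumptions, not part of the embodiment):

    def handle_query(store, graph_name, entity_id):
        graph = store.get(graph_name, {})  # all graph data of the third graph instance
        return graph.get(entity_id)        # the second graph data, or None if absent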
In the embodiment of the present application, in a scenario where graph management is performed by the method of steps 301 to 304, the second service node may receive a data query request sent by the client that carries the graph name of the third graph instance corresponding to the second graph data to be queried, perform the data query according to that graph name, and send the query result including the second graph data to the client, thereby achieving the beneficial effect of accurately querying graph data in a scenario where graph management is performed by the method of steps 301 to 304.
Fig. 8 is a flowchart of a graph database-based graph management apparatus according to an embodiment of the present application, and referring to fig. 8, the graph management apparatus is applied to a master monitoring node. The map management apparatus includes a receiving module 801, a determining module 802, and a managing module 803.
A receiving module 801, configured to receive a graph management request sent by a client, where the graph management request at least carries a graph name of a first graph instance to be managed, and the first graph instance is used to describe a relationship between multiple entities;
a determining module 802, configured to determine N serving nodes from the plurality of serving nodes, where N is greater than or equal to 2 and less than a total number of the plurality of serving nodes;
the management module 803 is configured to send the graph management request to the N service nodes to instruct the N service nodes to manage the first graph instance according to the graph name of the first graph instance.
Optionally, the graph management request is a graph creation request, where the graph creation request also carries the identifiers of the multiple entities and the relationships between the multiple entities;
the management module 803 includes:
and the deployment submodule is used for sending the graph creation request to the N service nodes so as to instruct each service node in the N service nodes to deploy the first graph instance on the service node.
Optionally, the graph data system further comprises a distributed data coordination node;
the determining module 802 includes:
the obtaining submodule is used for obtaining the number and the graph name of the graph instance deployed on each service node in the plurality of service nodes from the distributed data coordination node;
a first selection submodule, configured to randomly select N service nodes from the multiple service nodes if the number of the graph instances deployed on each service node is the same;
and the second selection submodule is used for selecting N service nodes from the plurality of service nodes in ascending order of the number of graph instances deployed on each service node if the numbers of graph instances deployed on the service nodes differ.
Optionally, the graph management request is a graph deletion request;
the determining module 802 includes:
a determining submodule, configured to determine, according to the graph name of the first graph instance, the service node to which the first graph instance is deployed from the plurality of service nodes as the N service nodes;
the management module 803 includes:
and the deleting submodule is used for sending the graph deleting request to the N service nodes so as to indicate the N service nodes to delete the first graph instance and delete graph data corresponding to the first graph instance, wherein the graph data comprises attribute information of the entities and attribute information of relationships among the entities.
Optionally, the graph data system further comprises a distributed data coordination node;
the graph management apparatus further includes:
and the updating module is used for sending an instance distribution updating request to the distributed data coordination node, wherein the instance distribution updating request carries the identifiers of the N service nodes and the graph name of the first graph instance so as to indicate the distributed data coordination node to update the number and the graph name of the graph instances deployed on the N service nodes.
Optionally, the graph data system further comprises a distributed data coordination node;
the graph management apparatus further includes:
the first acquisition module is used for acquiring the number and the graph name of the graph instance deployed on each service node in the plurality of service nodes from the distributed data coordination node every monitoring time;
and the adjusting module is used for adjusting the graph instances deployed on the plurality of service nodes according to the number of the graph instances deployed on the plurality of service nodes and the graph names when the difference value of the number of the graph instances deployed on any two service nodes in the plurality of service nodes is greater than or equal to 2, so that the difference value of the number of the graph instances deployed on any two service nodes in the plurality of service nodes after adjustment is smaller than 2.
Optionally, the graph data system further comprises a distributed data coordination node;
the graph management apparatus further includes:
the monitoring module is used for monitoring the running states of the plurality of service nodes, and the running states are used for indicating whether the service nodes are abnormal or normal;
a second obtaining module, configured to, when it is determined that an abnormal service node exists among the multiple service nodes, obtain, from the distributed data coordination node, the number and the graph name of the graph instance deployed on the abnormal service node, and the number and the graph name of the graph instance deployed on the remaining service nodes except the abnormal service node;
the deployment module is used for redeploying the graph instances deployed on the abnormal service node to the remaining service nodes according to the number and graph names of the graph instances deployed on the abnormal service node and the number and graph names of the graph instances deployed on the remaining service nodes;
and after the redeployment, the graph instances corresponding to the same graph name are deployed on N service nodes in the remaining service nodes, and the number of the graph instances deployed on any two service nodes in the remaining service nodes after the redeployment is the same or the difference between the number of the graph instances deployed on any two service nodes is 1.
In the embodiment of the present application, the graph data system includes a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes including a master monitoring node and at least one slave monitoring node. When the master monitoring node receives a graph management request sent by a client, it may determine N service nodes from the plurality of service nodes and send the graph management request only to those N service nodes, so as to instruct them to manage the first graph instance according to the graph name of the first graph instance to be managed, which is at least carried in the graph management request, the first graph instance being used to describe the relationship between a plurality of entities. Since N is greater than or equal to 2 and less than the total number of the plurality of service nodes, the master monitoring node only needs to instruct at least two service nodes, rather than every service node, to manage the first graph instance, so managing a large number of graph instances does not place heavy management pressure on any single service node.
It should be noted that when the graph database-based graph management apparatus of the above embodiment manages data, the division into the above functional modules is only an example; in practical applications, the functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the graph management apparatus provided by the above embodiment and the method embodiments of graph database-based graph management belong to the same concept; the specific implementation process is detailed in the method embodiments and is not repeated here.
Fig. 9 is a schematic structural diagram of a graph database-based data storage apparatus according to an embodiment of the present application. Referring to fig. 9, the data storage apparatus is applied in a first service node. The apparatus includes a receiving module 901 and a storage module 902.
A receiving module 901, configured to receive a data storage request sent by a client, where the data storage request carries first graph data to be stored and a graph name of a second graph instance;
the first service node is one of the N service nodes on which the second graph instance is deployed, determined by the client from the plurality of service nodes according to the graph name of the second graph instance; the second graph instance is the instance corresponding to the first graph data; the first graph data includes attribute information of at least one entity of a plurality of entities having a relationship and/or attribute information of the relationship between the plurality of entities; and N is greater than or equal to 2 and less than the total number of the plurality of service nodes;
a storage module 902, configured to store the first graph data according to the graph name of the second graph instance.
In the embodiment of the present application, in a scenario where graph management is performed by the above method, the first service node may receive a data storage request sent by the client that carries the first graph data to be stored and the graph name of the second graph instance, and store the first graph data according to the graph name of the second graph instance, thereby achieving the beneficial effect of accurately storing graph data in such a scenario.
It should be noted that when the graph database-based data storage apparatus of the above embodiment stores data, the division into the above functional modules is only an example; in practical applications, the functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the data storage apparatus provided by the above embodiment and the graph database-based data storage method belong to the same concept; the specific implementation process is detailed in the method embodiment and is not repeated here.
Fig. 10 is a schematic structural diagram of a graph database-based data query apparatus according to an embodiment of the present application. Referring to fig. 10, the data query apparatus is applied in a second service node. The apparatus includes a receiving module 1001 and a query module 1002.
A receiving module 1001, configured to receive a data query request sent by a client, where the data query request carries a graph name of a third graph instance;
the second service node is one of the N service nodes, where the third graph instance is deployed, determined by the client from the multiple service nodes according to the graph name of the third graph instance, where the third graph instance is an instance corresponding to second graph data to be queried, where the second graph data includes attribute information of at least one entity in multiple entities having a relationship and/or attribute information of a relationship between the multiple entities, and the N is greater than or equal to 2 and less than the total number of the multiple service nodes;
the query module 1002 is configured to perform data query according to the graph name of the third graph instance, and send a query result to the client, where the query result includes the second graph data.
In the embodiment of the present application, in a scenario where graph management is performed by the above method, the second service node may receive a data query request sent by the client that carries the graph name of the third graph instance, perform the data query according to that graph name, and send the query result including the second graph data corresponding to the third graph instance to the client, thereby achieving the beneficial effect of accurately querying graph data in such a scenario.
It should be noted that when the graph database-based data query apparatus of the above embodiment queries data, the division into the above functional modules is only an example; in practical applications, the functions may be allocated to different functional modules as required, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the data query apparatus provided by the above embodiment and the graph database-based data query method belong to the same concept; the specific implementation process is detailed in the method embodiment and is not repeated here.
Fig. 11 is a schematic structural diagram of a server according to an embodiment of the present application. The server 1100 may vary considerably in configuration or performance and may include one or more central processing units (CPUs) 1122 (e.g., one or more processors), memory 1132, and one or more storage media 1130 (e.g., one or more mass storage devices) storing application programs 1142 or data 1144. The memory 1132 and the storage medium 1130 may be transient or persistent storage. The program stored on the storage medium 1130 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Further, the central processing unit 1122 may be configured to communicate with the storage medium 1130 and to execute, on the server 1100, the series of instruction operations in the storage medium 1130.
The server 1100 may also include one or more power supplies 1126, one or more wired or wireless network interfaces 1150, one or more input/output interfaces 1158, one or more keyboards 1156, and/or one or more operating systems 1141, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The embodiment of the present application further provides a computer-readable storage medium applied in a terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the operations in the graph database-based graph management method of the above embodiment.
The embodiment of the present application further provides a computer-readable storage medium applied in a terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the operations in the graph database-based data storage method of the above embodiment.
The embodiment of the present application further provides a computer-readable storage medium applied in a terminal. The computer-readable storage medium stores at least one instruction, at least one program, a code set, or an instruction set, which is loaded and executed by a processor to implement the operations in the graph database-based data query method of the above embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (24)

1. A graph management method based on a graph database, wherein a graph data system comprises a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes comprise a master monitoring node and at least one slave monitoring node, the method comprises:
the method comprises the steps that a main monitoring node receives a graph management request sent by a client, wherein the graph management request at least carries a graph name of a first graph instance to be managed, and the first graph instance is used for describing the relationship among a plurality of entities;
the master monitoring node determining N serving nodes from the plurality of serving nodes, the N being greater than or equal to 2 and less than a total number of the plurality of serving nodes;
and the main monitoring node sends the graph management request to the N service nodes so as to indicate the N service nodes to manage the first graph instance according to the graph name of the first graph instance.
2. The method of claim 1, wherein the graph management request is a graph creation request, and the graph creation request further carries the identifiers of the plurality of entities and the relationships between the plurality of entities;
the sending, by the master monitoring node, the graph management request to the N service nodes to instruct the N service nodes to manage the first graph instance according to the graph name of the first graph instance, includes:
the master monitoring node sends the graph creation request to the N service nodes to instruct each of the N service nodes to deploy the first graph instance on itself.
3. The method of claim 2, wherein the graph data system further comprises a distributed data coordination node;
the master monitoring node determines N service nodes from the plurality of service nodes, including:
the main monitoring node acquires the number and the graph name of the graph instance deployed on each service node in the plurality of service nodes from the distributed data coordination node;
if the number of the deployed graph instances on each service node is the same, the main monitoring node randomly selects N service nodes from the service nodes;
and if the numbers of the graph instances deployed on the service nodes differ, the main monitoring node selects N service nodes from the plurality of service nodes in ascending order of the number of graph instances deployed on each service node.
4. The method of claim 1, wherein the graph management request is a graph deletion request;
the master monitoring node determines N service nodes from the plurality of service nodes, including:
the main monitoring node determines the service nodes with the first graph instance deployed from the plurality of service nodes as the N service nodes according to the graph name of the first graph instance;
the sending, by the master monitoring node, the graph management request to the N service nodes to instruct the N service nodes to manage the first graph instance according to the graph name of the first graph instance, includes:
the main monitoring node sends the graph deletion request to the N service nodes to indicate the N service nodes to delete the first graph instance and delete graph data corresponding to the first graph instance, wherein the graph data comprises attribute information of the entities and attribute information of relationships among the entities.
5. The method of any of claims 1-4, wherein the graph data system further comprises a distributed data coordination node;
after the master monitoring node sends the graph management request to the N service nodes, the method further includes:
and the main monitoring node sends an instance distribution updating request to the distributed data coordination node, wherein the instance distribution updating request carries the identifiers of the N service nodes and the graph name of the first graph instance, so as to indicate the distributed data coordination node to update the number and the graph name of the graph instances deployed on the N service nodes.
6. The method of claim 1, wherein the graph data system further comprises a distributed data coordination node;
the method further comprises the following steps:
the main monitoring node acquires the number and the graph name of the graph instance deployed on each service node in the plurality of service nodes from the distributed data coordination node every monitoring time;
when the difference between the number of the graph instances deployed on any two service nodes in the plurality of service nodes is greater than or equal to 2, the main monitoring node adjusts the graph instances deployed on the plurality of service nodes according to the number and the graph names of the graph instances deployed on the plurality of service nodes, so that the difference between the number of the graph instances deployed on any two service nodes in the plurality of service nodes after adjustment is smaller than 2.
7. The method of claim 1, wherein the graph data system further comprises a distributed data coordination node;
the method further comprises the following steps:
the main monitoring node monitors the running states of the plurality of service nodes, and the running states are used for indicating whether the service nodes are abnormal or normal;
when the main monitoring node determines that an abnormal service node exists among the plurality of service nodes, obtaining, from the distributed data coordination node, the number and graph names of the graph instances deployed on the abnormal service node and the number and graph names of the graph instances deployed on the remaining service nodes other than the abnormal service node;
the main monitoring node redeploys the graph instances deployed on the abnormal service node to the remaining service nodes according to the number and graph names of the graph instances deployed on the abnormal service node and the number and graph names of the graph instances deployed on the remaining service nodes;
wherein, after the redeployment, the graph instances corresponding to the same graph name are deployed on N service nodes among the remaining service nodes, and the numbers of the graph instances deployed on any two of the remaining service nodes after the redeployment are the same or differ by 1.
8. A method for storing data based on a graph database, wherein the graph data system comprises a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes comprises a master monitoring node and at least one slave monitoring node, and the method comprises:
a first service node receives a data storage request sent by a client, wherein the data storage request carries first graph data to be stored and a graph name of a second graph instance;
the first service node is one of N service nodes, where the second graph instance is deployed, determined by the client from the multiple service nodes according to the graph name of the second graph instance, the second graph instance is an instance corresponding to the first graph data, the first graph data includes attribute information of at least one entity in multiple entities with a relationship and/or attribute information of a relationship among the multiple entities, and N is greater than or equal to 2 and less than the total number of the multiple service nodes;
and the first service node stores the first graph data according to the graph name of the second graph instance.
9. A method for querying data based on a graph database, wherein the graph data system comprises a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes comprises a master monitoring node and at least one slave monitoring node, the method comprises:
a second service node receives a data query request sent by a client, wherein the data query request carries a graph name of a third graph instance;
the second service node is one service node of N service nodes, where the third graph instance is deployed, determined by the client from the multiple service nodes according to the graph name of the third graph instance, the third graph instance is an instance corresponding to second graph data to be queried, the second graph data includes attribute information of at least one entity in multiple entities with a relationship and/or attribute information of a relationship among the multiple entities, and N is greater than or equal to 2 and less than the total number of the multiple service nodes;
and the second service node carries out data query according to the graph name of the third graph instance and sends a query result to the client, wherein the query result comprises the second graph data.
10. A graph management apparatus based on a graph database, wherein a graph data system includes a plurality of service nodes and a plurality of monitoring nodes, the plurality of monitoring nodes includes a master monitoring node and at least one slave monitoring node, the graph management apparatus is applied to the master monitoring node, and the graph management apparatus includes:
a receiving module, configured to receive a graph management request sent by a client, where the graph management request at least carries a graph name of a first graph instance to be managed, and the first graph instance is used to describe a relationship between multiple entities;
a determining module to determine N serving nodes from the plurality of serving nodes, the N being greater than or equal to 2 and less than a total number of the plurality of serving nodes;
and the management module is used for sending the graph management request to the N service nodes so as to indicate the N service nodes to manage the first graph instance according to the graph name of the first graph instance.
11. The graph management apparatus according to claim 10, wherein the graph management request is a graph creation request, and the graph creation request further carries the identifiers of the plurality of entities and the relationships between the plurality of entities;
the management module comprises:
a deployment submodule, configured to send the graph creation request to the N service nodes, so as to instruct each service node in the N service nodes to deploy the first graph instance on itself.
12. The graph management apparatus of claim 11, wherein the graph data system further comprises a distributed data coordination node;
the determining module comprises:
the obtaining submodule is used for obtaining the number and the graph name of the graph instance deployed on each service node in the plurality of service nodes from the distributed data coordination node;
a first selection submodule, configured to randomly select N service nodes from the plurality of service nodes if the number of the graph instances deployed on each service node is the same;
and the second selection submodule is used for selecting N service nodes from the plurality of service nodes in ascending order of the number of graph instances deployed on each service node if the numbers of graph instances deployed on the service nodes differ.
13. The graph management apparatus according to claim 10, wherein the graph management request is a graph deletion request;
the determining module comprises:
a determining submodule, configured to determine, according to the graph name of the first graph instance, a service node, to which the first graph instance is deployed, from the plurality of service nodes as the N service nodes;
the management module comprises:
and the deleting submodule is used for sending the graph deleting request to the N service nodes so as to indicate the N service nodes to delete the first graph instance and delete graph data corresponding to the first graph instance, wherein the graph data comprises attribute information of the entities and attribute information of relationships among the entities.
14. The graph management apparatus according to any one of claims 10-13, wherein the graph data system further comprises a distributed data coordination node;
the graph management apparatus further includes:
an updating module, configured to send an instance distribution update request to the distributed data coordination node, where the instance distribution update request carries the identifiers of the N service nodes and the graph name of the first graph instance, so as to indicate the distributed data coordination node to update the number of the graph instances and the graph name deployed on the N service nodes.
15. The graph management apparatus of claim 10, wherein the graph data system further comprises a distributed data coordination node;
the graph management apparatus further includes:
the first acquisition module is used for acquiring the number and the graph name of the graph instance deployed on each service node in the plurality of service nodes from the distributed data coordination nodes at intervals of monitoring duration;
and the adjusting module is used for adjusting the graph instances deployed on the plurality of service nodes according to the number of the graph instances deployed on the plurality of service nodes and the graph names when the difference value of the number of the graph instances deployed on any two service nodes in the plurality of service nodes is greater than or equal to 2, so that the difference value of the number of the graph instances deployed on any two service nodes in the plurality of service nodes after adjustment is smaller than 2.
16. The graph management apparatus of claim 10, wherein the graph data system further comprises a distributed data coordination node;
the graph management apparatus further includes:
the monitoring module is used for monitoring the running states of the plurality of service nodes, and the running states are used for indicating whether the service nodes are abnormal or normal;
a second obtaining module, configured to, when it is determined that an abnormal service node exists in the plurality of service nodes, obtain, from the distributed data coordination node, the number and the graph name of the graph instance deployed on the abnormal service node, and the number and the graph name of the graph instance deployed on the remaining service nodes except the abnormal service node;
the deployment module is used for redeploying the graph instances deployed on the abnormal service node to the remaining service nodes according to the number and graph names of the graph instances deployed on the abnormal service node and the number and graph names of the graph instances deployed on the remaining service nodes;
wherein, after the redeployment, the graph instances corresponding to the same graph name are deployed on N service nodes among the remaining service nodes, and the numbers of the graph instances deployed on any two of the remaining service nodes after the redeployment are the same or differ by 1.
17. A data storage apparatus based on a graph database, applied to a first service node in a graph data system, wherein the graph data system comprises a plurality of service nodes and a plurality of monitoring nodes, and the plurality of monitoring nodes comprise a master monitoring node and at least one slave monitoring node, the data storage apparatus comprising:
a receiving module, configured to receive a data storage request sent by a client, wherein the data storage request carries first graph data to be stored and a graph name of a second graph instance;
wherein the first service node is one of N service nodes on which the second graph instance is deployed, the N service nodes being determined by the client from the plurality of service nodes according to the graph name of the second graph instance; the second graph instance is an instance corresponding to the first graph data; the first graph data comprises attribute information of at least one of a plurality of entities having relationships and/or attribute information of the relationships among the plurality of entities; and N is greater than or equal to 2 and less than the total number of the plurality of service nodes; and
a storage module, configured to store the first graph data according to the graph name of the second graph instance.
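For illustration, the client-side routing and node-side storage of claim 17 might look like the following; how the client learns the graph-name-to-node mapping, and the store layout, are assumptions:

```python
def store(deployment, stores, graph_name, graph_data):
    """deployment: graph name -> ids of the N nodes hosting its instance.
    stores: node id -> that node's {graph name: [graph data records]} map."""
    node_id = min(deployment[graph_name])   # client picks one of the N nodes
    # The chosen first service node stores the data under the graph name.
    stores[node_id].setdefault(graph_name, []).append(graph_data)
    return node_id
```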
18. A data query device based on a graph database, applied to a second service node in a graph data system, wherein the graph data system comprises a plurality of service nodes and a plurality of monitoring nodes, and the plurality of monitoring nodes comprise a master monitoring node and at least one slave monitoring node, the data query device comprising:
a receiving module, configured to receive a data query request sent by a client, wherein the data query request carries a graph name of a third graph instance;
wherein the second service node is one of N service nodes on which the third graph instance is deployed, the N service nodes being determined by the client from the plurality of service nodes according to the graph name of the third graph instance; the third graph instance is an instance corresponding to second graph data to be queried; the second graph data comprises attribute information of at least one of a plurality of entities having relationships and/or attribute information of the relationships among the plurality of entities; and N is greater than or equal to 2 and less than the total number of the plurality of service nodes; and
a query module, configured to perform a data query according to the graph name of the third graph instance and send a query result to the client, wherein the query result comprises the second graph data.
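The read path of claim 18 mirrors the write path: the client resolves the graph name to one of the N hosting nodes, and that node answers from the data stored under the graph name. A sketch under the same illustrative assumptions:

```python
def query(deployment, stores, graph_name):
    node_id = min(deployment[graph_name])        # client-side routing
    # Query result sent back to the client: the second graph data.
    return stores[node_id].get(graph_name, [])
```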
19. A graph management apparatus based on a graph database, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method according to any one of claims 1 to 7.
20. A data storage device based on a graph database, the data storage device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of claim 8.
21. A data query device based on a graph database, the data query device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform the steps of the method of claim 9.
22. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 7.
23. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method of claim 8.
24. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method of claim 9.
CN201910476543.3A 2019-06-03 2019-06-03 Graph management, data storage and data query methods, devices and storage medium Active CN112035579B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910476543.3A CN112035579B (en) 2019-06-03 2019-06-03 Graph management, data storage and data query methods, devices and storage medium

Publications (2)

Publication Number Publication Date
CN112035579A true CN112035579A (en) 2020-12-04
CN112035579B CN112035579B (en) 2024-02-20

Family

ID=73576399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910476543.3A Active CN112035579B (en) 2019-06-03 2019-06-03 Graph management, data storage and data query methods, devices and storage medium

Country Status (1)

Country Link
CN (1) CN112035579B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378707A (en) * 2019-07-24 2019-10-25 北京慧眼智行科技有限公司 A kind of information processing method and device
CN113326276A (en) * 2021-06-23 2021-08-31 北京金山数字娱乐科技有限公司 Graph database updating method and device

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007200349A (en) * 2007-03-26 2007-08-09 Club It Corp Server-client system, load distribution device, load distribution method, and load distribution program
CN102158540A (en) * 2011-02-18 2011-08-17 广州从兴电子开发有限公司 System and method for realizing distributed database
CN103929454A (en) * 2013-01-15 2014-07-16 中国移动通信集团四川有限公司 Load balancing storage method and system in cloud computing platform
CN103823846A (en) * 2014-01-28 2014-05-28 浙江大学 Method for storing and querying big data on basis of graph theories
US20150249583A1 (en) * 2014-03-03 2015-09-03 Microsoft Corporation Streaming query resource control
CN105871994A (en) * 2015-12-15 2016-08-17 乐视网信息技术(北京)股份有限公司 Static file service method and unit
US20180181600A1 (en) * 2016-12-28 2018-06-28 Tmax Cloud Co., Ltd. Method and apparatus for organizing database system in a cloud environment
CN108874528A (en) * 2017-05-09 2018-11-23 北京京东尚科信息技术有限公司 Distributed task scheduling storage system and distributed task scheduling storage/read method
US20190065560A1 (en) * 2017-08-31 2019-02-28 Sap Se Caching for query processing systems
CN109284265A (en) * 2018-09-05 2019-01-29 视联动力信息技术股份有限公司 A kind of date storage method and system
CN109343962A (en) * 2018-10-26 2019-02-15 北京知道创宇信息技术有限公司 Data processing method, device and distribution service

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LUYAO CHEN et al.: "Secure large-scale genome data storage and query", Computer Methods and Programs in Biomedicine, vol. 165, pages 129-137, XP085506090, DOI: 10.1016/j.cmpb.2018.08.007 *
ZHANG QIAN; GE YUFEI; LIANG HONG: "Surveillance Video Cloud Storage System", Computer Systems & Applications, no. 10, pages 92-96 *

Also Published As

Publication number Publication date
CN112035579B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
CN105100259B (en) A kind of distributed timing task executing method and system
CN105487980B (en) The method and device that repairing applications are operating abnormally
CN106936618B (en) Data acquisition method and system
US20190042659A1 (en) Data writing and reading and apparatus and cloud storage system
CN107545338B (en) Service data processing method and service data processing system
US20100228839A1 (en) Efficient on-demand provisioning of servers for specific software sets
CN105701099B (en) For executing the method, apparatus and system of task in distributed environment
US10944655B2 (en) Data verification based upgrades in time series system
CN111147596B (en) Prometous cluster deployment method, device, equipment and medium
CN110149366B (en) Method and device for improving availability of cluster system and computer equipment
CN113434283B (en) Service scheduling method and device, server and computer readable storage medium
CN111399764A (en) Data storage method, data reading device, data storage equipment and data storage medium
CN113742135A (en) Data backup method and device and computer readable storage medium
CN112035579B (en) Graph management, data storage and data query methods, devices and storage medium
CN109445911B (en) CVM (continuously variable memory) instance adjusting method and device, cloud platform and server
CN112631680B (en) Micro-service container scheduling system, method, device and computer equipment
CN108900435B (en) Service deployment method, device and computer storage medium
CN108574718B (en) Cloud host creation method and device
CN106886452B (en) Method for simplifying task scheduling of cloud system
CN107645396B (en) Cluster capacity expansion method and device
CN117687653A (en) Distribution method and device of system update resources, electronic equipment and storage medium
CN115426356A (en) Distributed timed task lock update control execution method and device
CN115914404A (en) Cluster flow management method and device, computer equipment and storage medium
WO2022220830A1 (en) Geographically dispersed hybrid cloud cluster
CN113485828A (en) Distributed task scheduling system and method based on quartz

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant