CN118445293B - Multistage data linkage method, device and storage medium under complex service scene - Google Patents
- Publication number
- CN118445293B (application CN202410903515.6A)
- Authority
- CN
- China
- Prior art keywords
- information
- trigger
- field information
- data table
- sql
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/22—Indexing; Data structures therefor; Storage structures
- G06F16/2282—Tablespace storage structures; Management thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/242—Query formulation
- G06F16/2433—Query languages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/284—Relational databases
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The application discloses a multistage data linkage method, a device and a storage medium under a complex service scene. The method comprises the following steps: reading an entry data table, wherein an entity table and/or an expansion information table mapped by field information is stored in the entry data table; when field information in the entry data table is changed, generating SQL statements according to the change, storing the SQL statements into a preconfigured dynamic SQL data table, and marking the execution state of each SQL statement in the dynamic SQL data table; calling a back-end API interface to execute the SQL statements stored in the dynamic SQL data table and marked as unexecuted; and configuring a trigger in the target expansion information table, wherein the trigger is preconfigured with a trigger rule and an execution rule, and when the field information in the target expansion information table changes to meet the trigger rule, the corresponding field information in the mapped entity table or the expansion information table or other expansion information tables is changed through the trigger based on the execution rule.
Description
Technical Field
The present application relates to the field of database technologies, and in particular, to a method, an apparatus, and a storage medium for linking multi-level data in a complex service scenario.
Background
In modern enterprise information management systems, data is typically stored in multiple relational database tables, with complex associations and linkages between different tables. For example, in a sales management system, the basic information and extension information of a sales order may be stored in different database tables, and the data between these tables needs to be frequently synchronized and matched. As business scenarios become more complex, developers face the following challenges when designing and implementing these systems:
Frequent data linkage: there are frequent data entry and update operations between the different tables, which requires the system to be able to synchronize the data accurately in real time. For example, when a user enters data in one table, the data in the other relevant tables needs to be updated at the same time.
Complex data matching: business logic often requires complex calculation and matching of data across different tables to ensure data consistency and integrity.
High development and communication costs: when implementing complex business logic, developers need to frequently communicate and confirm the data linkage and matching approach, which easily leads to low development efficiency and frequent errors.
High maintenance difficulty: as business requirements change, the system needs to be continuously adjusted and optimized, and the maintenance workload is considerable.
To solve the above problems, some current systems manually write SQL statements and triggers to perform data linkage and matching, but this approach has the following disadvantages:
Low efficiency of manually writing SQL statements: developers need to write a large number of SQL statements by hand, which is labor-intensive and error-prone.
Poor flexibility: manually written SQL statements and triggers are difficult to adapt to continuously changing business requirements, and the cost of modification and extension is high.
Lack of unified management and configuration: SQL statements and triggers written by different developers vary in style, making centralized management and maintenance difficult.
Against this background, the present invention provides a new technical solution in which data entry items and linkage rules are managed through configuration, so that multi-table data linkage and matching are realized, development and maintenance costs are effectively reduced, and the flexibility and stability of the system are improved.
Disclosure of Invention
In order to solve the technical problems, the application provides a multistage data linkage method, a multistage data linkage device and a storage medium under a complex service scene.
The first aspect of the application provides a multistage data linkage method under a complex service scene, which comprises the following steps:
Reading an entry data table, wherein a regular check rule is configured in the entry data table, the entry data table is used for recording field information, and an entity table and/or an expansion information table mapped by the field information are stored in the entry data table;
When field information in the entry data table is changed, generating an SQL statement according to the change, storing the SQL statement into a preconfigured dynamic SQL data table, and marking the execution state of each SQL statement in the dynamic SQL data table;
Calling a back-end API interface to execute SQL sentences which are stored in the dynamic SQL data table and marked as non-executed states, so that field information in the mapped target extension information table synchronizes the changes;
And a trigger is configured in the target expansion information table, a trigger rule and an execution rule are preconfigured in the trigger, and when the field information in the target expansion information table changes to meet the trigger rule, the corresponding field information in the mapped entity table or the expansion information table or other expansion information tables is changed through the trigger based on the execution rule.
Optionally, the changing, based on the execution rule, the mapped entity table or the corresponding field information in the extended information table or other extended information tables by the trigger includes:
And changing the same field information in the entity table or other expansion information tables through the trigger.
Optionally, the changing, based on the execution rule, the mapped entity table or the corresponding field information in the extended information table or other extended information tables by the trigger includes:
inquiring a pre-configured field information matching table through the trigger, wherein the field information matching table records association functions among fields;
Extracting the changed field information through the trigger, and transmitting the changed field information to the association function as a parameter;
and calculating output of the associated target field information through the association function, and changing corresponding field information in the entity table or the expansion information table or other expansion information tables through the trigger.
Optionally, when the field information in the entry data table is changed, performing regular check on the input content; the regular check rule is used for verifying whether the format of the input content accords with a predefined specification;
If the input content meets the regular check rule, continuing to execute the subsequent processing flow;
and if the input content does not meet the regular check rule, rejecting the input.
Optionally, the entry data table defines the following fields:
id: a primary key uniquely identifying each data entry;
column_name: a field name of the data entry;
column_type: the data type of the field;
table_name: the table name to which the field belongs;
dictionary_key: dictionary keys for associating data in the dictionary tables;
display_width: displaying the width of the interface;
validization_regex: and checking the regular expression.
Optionally, the field information matching table defines the following fields:
source_table: source table name;
source_column: source table field name;
target_table: a target table name;
target_column: a target table field name;
calculation_formula: the association function.
Optionally, when a service data table needs to be newly inserted, the method further includes:
creating a service data table and configuring information of each field in the service data table;
Configuring the association relation between each field information in the newly added service data table and the association relation between each field information in the service data table and the field information of the existing entity table and/or the expansion information table;
configuring a trigger in the service data table, and detecting the service data table through the trigger;
when the field information in the business data table is changed, the associated entity table or the expansion information table is changed based on the association relation through the trigger.
The second aspect of the present application provides a multi-level data linkage device in a complex service scenario, the device comprising:
The reading unit is used for reading an entry data table, wherein a regular check rule is configured in the entry data table, the entry data table is used for recording field information, and an entity table and/or an expansion information table mapped by the field information are stored in the entry data table;
The SQL statement generating unit is used for generating an SQL statement according to the change when the field information in the entry data table is changed, storing the SQL statement into a preconfigured dynamic SQL data table, and marking the execution state of each SQL statement in the dynamic SQL data table;
The SQL statement executing unit is used for calling a back-end API interface to execute the SQL statement which is stored in the dynamic SQL data table and marked as an unexecuted state, so that the field information in the mapped target extension information table synchronizes the change;
the trigger unit is used for changing the mapped entity table or the corresponding field information in the expansion information table or other expansion information tables based on the execution rule when the field information in the target expansion information table is changed to meet the trigger rule.
The third aspect of the present application provides another multi-level data linkage device in a complex business scenario, the device comprising:
A processor, a memory, an input-output unit, and a bus;
the processor is connected with the memory, the input/output unit and the bus;
The memory holds a program that the processor invokes to perform the method of the first aspect or any optional implementation of the first aspect.
A fourth aspect of the application provides a computer readable storage medium having stored thereon a program which, when executed on a computer, performs the method of the first aspect or any optional implementation of the first aspect.
From the above technical scheme, the application has the following advantages:
1. By configuring regular check rules and SQL generation logic in the entry data table, the system can automatically generate and execute SQL statements, realizing automatic synchronization of data. This reduces manual intervention, lowers the probability of errors, and improves data consistency and accuracy.
2. And storing the generated SQL statement into a dynamic SQL data table, and marking the execution state, so that the SQL statement is convenient to manage and monitor. The mechanism enables the system to flexibly process different data changes, and provides higher expandability and maintainability.
3. By managing the data entry items and the linkage rules through configuration, developers do not need to frequently communicate and confirm the data linkage method. This reduces communication costs and improves development efficiency, which is particularly important in complex business scenarios.
4. And configuring a trigger and an execution rule to enable the system to dynamically adjust the data linkage and synchronization strategy according to the service requirement. The system can flexibly adapt to service changes no matter the data table is newly added or the existing data table is modified, and higher flexibility and adaptability are provided.
5. The system is changed and expanded more conveniently through configuration management and automatic execution. The new requirement only needs to modify the configuration or add new configuration items, and the existing codes do not need to be modified greatly, so that maintenance work is greatly simplified.
6. Through a trigger mechanism, when field information in the target expansion information table is changed, the system can trigger corresponding operation in real time, and real-time synchronization and linkage of data are ensured. This improves the real-time performance and processing efficiency of the system.
7. The accuracy and the integrity of data input and change are ensured through strict regular check rules and an automatic SQL execution flow. The consistency and integrity of data are key to the stable operation of the system in complex business scenarios.
8. The dynamic SQL data table intensively records and manages all the generated SQL sentences and the execution state thereof, thereby facilitating the monitoring and auditing of a system administrator. Any problem can be rapidly positioned and solved, and the reliability of the system is ensured.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of an embodiment of a multi-level data linkage method in a complex service scenario provided in the present application;
fig. 2 is a schematic flow chart of another embodiment of a multi-level data linkage method in a complex service scenario provided in the present application;
FIG. 3 is a schematic structural diagram of an embodiment of a multi-level data linkage device in a complex business scenario provided in the present application;
Fig. 4 is a schematic structural diagram of an embodiment of a multi-level data linkage device in another complex service scenario provided in the present application.
Detailed Description
It should be noted that the method provided by the application can be applied to a terminal, a system or a server; for example, the terminal may be a smart phone, a computer, a tablet computer, a smart television, a smart watch, a portable computer terminal, or another fixed or mobile terminal. For convenience of explanation, the present application is described with the terminal as the execution subject.
Referring to fig. 1, the present application first provides an embodiment of a multi-level data linkage method in a complex service scenario, where the embodiment includes:
S101, reading an entry data table, wherein a regular check rule is configured in the entry data table, the entry data table is used for recording field information, and an entity table and/or an expansion information table mapped by the field information are stored in the entry data table;
In this step, the system will read a particular database table, the "entry data table". This table is used to record various field information such as data entered by the user. The table is also provided with regular checking rules for verifying whether the input data meets specific format requirements. In addition, the entry data table also stores mappings between these field information and other database tables (e.g., entity and/or extension information tables).
Specific field names, field types, write table names, check regular expressions, etc. may be set in the entry data table. When the user performs data logging on the interface, the system performs data verification according to the configurations.
For example, when a user enters data in a front-end form, the system may generate form fields according to the configuration in the entry data table and perform regular expression verification. For example, the user inputs the following data:
Name: Sample Entity
Description: This is a sample description.
Creation time: 2024-06-17 12:00:00
Update time: 2024-06-17 12:00:00
The system will check whether the name field conforms to the regular expression ^[A-Za-z ]+$.
In an alternative implementation, the entry data table defines the following fields:
id: a primary key uniquely identifying each data entry;
column_name: a field name of the data entry;
column_type: the data type of the field;
table_name: the table name to which the field belongs;
dictionary_key: dictionary keys for associating data in the dictionary tables;
display_width: displaying the width of the interface;
validization_regex: and checking the regular expression.
An example of code for an entry data table structure is provided below:
CREATE TABLE sys_input_columns (
id INT PRIMARY KEY,
column_name VARCHAR(50),
column_type VARCHAR(20),
table_name VARCHAR(50),
dictionary_key VARCHAR(50),
display_width INT,
validation_regex VARCHAR(255)
);
-- Example data
INSERT INTO sys_input_columns (id, column_name, column_type, table_name, dictionary_key, display_width, validation_regex) VALUES
(1, 'name', 'VARCHAR', 'new_entity', NULL, 20, '^[A-Za-z ]+$'),
(2, 'description', 'VARCHAR', 'new_entity', NULL, 50, NULL),
(3, 'created_at', 'TIMESTAMP', 'new_entity', NULL, 20, NULL),
(4, 'updated_at', 'TIMESTAMP', 'new_entity', NULL, 20, NULL);
S102, when field information in the entry data table is changed, generating SQL statements according to the change, storing the SQL statements into a preconfigured dynamic SQL data table, and marking the execution state of each SQL statement in the dynamic SQL data table;
Whether field information in the entry data table is changed (e.g., insert, update, delete operations) is monitored. These changes are detected by the system when the user modifies and saves fields in the entry data table at the front-end interface. Based on these changes, the system dynamically generates corresponding SQL statements for updating other database tables associated with these fields. These generated SQL statements would be stored in a "dynamic SQL data table" and each would be marked with its execution status (e.g., "not executed").
For example, in an e-commerce system, if the price of a commodity is modified, the system may generate an SQL statement to update the price field in the "commodity entity table". This SQL statement is then added to the dynamic SQL data table and marked as "unexecuted".
An example code for a dynamic SQL data table structure is provided below:
CREATE TABLE fact_dynamic_sql (
id INT PRIMARY KEY AUTO_INCREMENT,
sql_statement TEXT,
executed BOOLEAN DEFAULT FALSE
);
-- Example data
INSERT INTO fact_dynamic_sql (sql_statement, executed) VALUES
('INSERT INTO new_entity (id, name, description, created_at, updated_at) VALUES (1, ''Sample Entity'', ''This is a sample description.'', ''2024-06-17 12:00:00'', ''2024-06-17 12:00:00'')', FALSE);
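Continuing the e-commerce example above, the record for a commodity price change might look like the following minimal sketch (the goods_entity table name, price value, and id are illustrative assumptions, not part of the example data above):
-- Record the generated price-update statement and mark it as not yet executed
INSERT INTO fact_dynamic_sql (sql_statement, executed)
VALUES ('UPDATE goods_entity SET price = 99.00 WHERE id = 1001', FALSE);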
S103, calling a back-end API interface to execute the SQL statement which is stored in the dynamic SQL data table and marked as an unexecuted state, so that the field information in the mapped target extension information table synchronizes the change;
In this step, the system will call the back-end API interface, executing the SQL statement marked as "unexecuted" in the dynamic SQL data table. The purpose of this is to ensure that all the extended information tables associated with the entry data table are updated in time, thereby maintaining data consistency. This step aims at synchronizing changes in the entry information table into the extended information table by executing non-executed SQL statements in the dynamic SQL data table.
Continuing with the e-commerce system as an example, the back-end API interface may be invoked to execute the previously generated SQL statement, thereby updating the price field in the "commodity entity table". Once the SQL statement is executed successfully, its execution state in the dynamic SQL data table will be updated to "executed".
An example of the code for this step is provided below:
-- Query the SQL statements not yet executed (via the back-end API call)
SELECT id, sql_statement FROM fact_dynamic_sql WHERE executed = FALSE;
-- Execute the SQL statement
-- For example: perform an insert operation
INSERT INTO new_entity (id, name, description, created_at, updated_at)
VALUES (1, 'Sample Entity', 'This is a sample description.', '2024-06-17 12:00:00', '2024-06-17 12:00:00');
-- Update the execution state
UPDATE fact_dynamic_sql SET executed = TRUE WHERE id = 1;
And S104, configuring a trigger in the target expansion information table, wherein a trigger rule and an execution rule are preconfigured in the trigger, and when the field information in the target expansion information table is changed to meet the trigger rule, changing the mapped entity table or the corresponding field information in the expansion information table or other expansion information tables through the trigger based on the execution rule.
In the target extension information table, the system configures a trigger. These triggers contain predefined trigger rules and execution rules. When field information in the extended information table is changed and the changes conform to the trigger rules of the trigger, the trigger automatically updates other associated database tables according to the execution rules.
An example of code for updating the respective data tables by triggers is provided below:
-- Create a trigger to synchronize related table information when inserting into or updating the new_entity table
CREATE TRIGGER trg_sync_related_entity
AFTER INSERT ON new_entity
FOR EACH ROW
BEGIN
INSERT INTO related_entity (entity_id, name, description)
VALUES (NEW.id, NEW.name, NEW.description);
END;
CREATE TRIGGER trg_update_related_entity
AFTER UPDATE ON new_entity
FOR EACH ROW
BEGIN
UPDATE related_entity
SET name = NEW.name, description = NEW.description
WHERE entity_id = NEW.id;
END;
For a better description of the method provided by the present application, the following describes the whole flow of the method by means of a specific example:
Assume a sales management system comprising the following database tables:
sales_order table: stores the basic information of sales orders, corresponding to the entity table described in the present application.
fact_sales_order_ext table: stores the extended information of sales orders (e.g., discount, tax, etc.), corresponding to the extension information table in the present application.
sys_input_columns table: configures data entry items, corresponding to the entry data table in the present application.
sys_rows_column_mapping table: configures the mapping relationships and calculation formulas between fields, corresponding to the field information matching table in the present application.
fact_dynamic_sql table: records the generated dynamic SQL statements and their execution states, corresponding to the dynamic SQL data table in the present application.
In the process of sales order management, the following functions need to be implemented:
Data entry is configured in the sys_input_columns table and data verification is performed.
When the data is saved, SQL statements are generated and executed, and the data is inserted into the corresponding tables.
When order data is changed, related synchronization and calculation operations are automatically performed through triggers.
The changes to the extension information are synchronized into the master order table and the dynamic SQL statement is recorded for subsequent execution.
First, the structure of each table is as follows:
1. sales_order table:
CREATE TABLE sales_order (
id INT PRIMARY KEY,
order_number VARCHAR(50),
customer_name VARCHAR(50),
total_amount DECIMAL(10, 2),
discount DECIMAL(10, 2),
tax DECIMAL(10, 2),
net_amount DECIMAL(10, 2)
);
2. fact_sales_order_ext table:
CREATE TABLE fact_sales_order_ext (
id INT PRIMARY KEY,
order_id INT,
discount DECIMAL(10, 2),
tax DECIMAL(10, 2),
FOREIGN KEY (order_id) REFERENCES sales_order(id)
);
3. sys_input_columns table:
CREATE TABLE sys_input_columns (
id INT PRIMARY KEY,
column_name VARCHAR(50),
column_type VARCHAR(20),
table_name VARCHAR(50),
dictionary_key VARCHAR(50),
display_width INT,
validation_regex VARCHAR(100)
);
INSERT INTO sys_input_columns VALUES
(1, 'discount', 'DECIMAL', 'fact_sales_order_ext', NULL, 10, '^[0-9]+(\.[0-9]{1,2})?$'),
(2, 'tax', 'DECIMAL', 'fact_sales_order_ext', NULL, 10, '^[0-9]+(\.[0-9]{1,2})?$');
4. sys_rows_column_mapping table:
CREATE TABLE sys_rows_column_mapping (
source_table VARCHAR(50),
source_column VARCHAR(50),
target_table VARCHAR(50),
target_column VARCHAR(50),
calculation_formula VARCHAR(100)
);
INSERT INTO sys_rows_column_mapping VALUES
('fact_sales_order_ext', 'discount', 'sales_order', 'discount', NULL),
('fact_sales_order_ext', 'tax', 'sales_order', 'tax', NULL),
('fact_sales_order_ext', 'id', 'sales_order', 'net_amount', 'total_amount - discount + tax');
5. fact_dynamic_sql table:
CREATE TABLE fact_dynamic_sql (
id INT PRIMARY KEY AUTO_INCREMENT,
sql_statement TEXT,
executed BOOLEAN DEFAULT FALSE
);
Data entry items are configured in the sys_input_columns table by setting specific field names, field types, target table names, validation regular expressions, and so on. When the user performs data entry on the interface, the system performs data verification according to these configurations.
For example, when the user enters a discount and tax, the system checks whether these fields meet the requirements of the regular expression. If the data does not meet the requirements, an error prompt is displayed.
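As a minimal illustration of this check (in practice the front end or back end usually performs the regex validation; expressing it directly in MySQL is only an assumption for demonstration):
-- Check an entered discount value against the configured regular expression;
-- returns 1 when the format is valid, 0 otherwise.
SELECT '50.00' REGEXP (
    SELECT validation_regex
    FROM sys_input_columns
    WHERE table_name = 'fact_sales_order_ext' AND column_name = 'discount'
) AS is_valid;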
When the user clicks the save button, the system will generate an SQL statement and insert the data into the corresponding table, while recording the generated SQL statement and related parameters.
The implementation mode is as follows:
-- Generate and execute the insert statement
INSERT INTO fact_sales_order_ext (id, order_id, discount, tax)
VALUES (1, 1, 50.00, 5.00);
-- Record the generated SQL statement
INSERT INTO fact_dynamic_sql (sql_statement, executed)
VALUES ('INSERT INTO fact_sales_order_ext (id, order_id, discount, tax) VALUES (1, 1, 50.00, 5.00)', FALSE);
Data synchronization and calculation through triggers
Triggers are set on the fact_sales_order_ext table to realize the data synchronization and calculation operations.
-- Trigger 1: synchronize discount and tax to the sales_order table
CREATE TRIGGER trg_update_sales_order
AFTER UPDATE ON fact_sales_order_ext
FOR EACH ROW
BEGIN
-- Function 1: update values from the fact_sales_order_ext table into the entity table
UPDATE sales_order
SET discount = NEW.discount,
tax = NEW.tax
WHERE id = NEW.order_id;
END;
-- Trigger 2: calculate the net amount and update it in the sales_order table
CREATE TRIGGER trg_calculate_net_amount
AFTER UPDATE ON fact_sales_order_ext
FOR EACH ROW
BEGIN
-- Function 2: calculate the new value according to the calculation formula and store it in the sales_order table
DECLARE v_total_amount DECIMAL(10, 2);
DECLARE v_discount DECIMAL(10, 2);
DECLARE v_tax DECIMAL(10, 2);
DECLARE v_net_amount DECIMAL(10, 2);
SELECT total_amount INTO v_total_amount FROM sales_order WHERE id = NEW.order_id;
SET v_discount = NEW.discount;
SET v_tax = NEW.tax;
SET v_net_amount = v_total_amount - v_discount + v_tax;
UPDATE sales_order
SET net_amount = v_net_amount
WHERE id = NEW.order_id;
END;
-- Trigger 3: record the unexecuted target SQL statement
CREATE TRIGGER trg_record_dynamic_sql
AFTER INSERT ON fact_sales_order_ext
FOR EACH ROW
BEGIN
-- Function 3: record the target SQL statement into the fact_dynamic_sql table
INSERT INTO fact_dynamic_sql (sql_statement, executed)
VALUES (CONCAT('UPDATE sales_order SET discount = ', NEW.discount, ', tax = ', NEW.tax, ' WHERE id = ', NEW.order_id), FALSE);
END;
The trigger code examples above include an implementation example of configuring the triggers and an implementation example of the corresponding changes executed by the triggers.
The above example shows in detail how the whole technical solution is realized, from configuring data entry items, generating and executing SQL statements, and using triggers to realize data synchronization and calculation, to finally calling the back-end interface to execute the SQL statements. The solution effectively solves the problems of multi-table linkage and frequent data matching in complex business scenarios, reduces the communication cost among developers, and improves the degree of automation and development efficiency of the system.
The foregoing embodiments describe how the trigger changes the mapped entity table or extension information table; in practice, there may be multiple implementations, for example:
Implementation 1: the same field information may be changed in the entity table or other extension information tables through the trigger.
Or Implementation 2: querying a preconfigured field information matching table through the trigger, wherein the field information matching table records the association functions between fields;
Extracting the changed field information through the trigger, and transmitting the changed field information to the association function as a parameter;
and calculating output of the associated target field information through the association function, and changing corresponding field information in the entity table or the expansion information table or other expansion information tables through the trigger.
In embodiments, the field information in the entity table or the extended information table may be changed by a trigger, and there are various implementations. Two implementations are described in detail below: directly changing the same field information and changing through a field information matching table and an association function.
Implementation 1: directly changing the same field information
In this implementation, the same field information is changed in the entity table or the extension information table by the trigger.
Examples:
Assume two tables, entity_table and entity_ext_table: when a field in entity_table is changed, the same field in entity_ext_table is changed synchronously, as sketched below.
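A minimal sketch of Implementation 1, assuming entity_table and entity_ext_table share a field named status and are linked by an entity_id column (these field and key names are illustrative assumptions):
-- Keep the shared "status" field in entity_ext_table in sync
-- whenever it changes in entity_table
CREATE TRIGGER trg_sync_same_field
AFTER UPDATE ON entity_table
FOR EACH ROW
BEGIN
    UPDATE entity_ext_table
    SET status = NEW.status
    WHERE entity_id = NEW.id;
END;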
Implementation 2: changing through a field information matching table and an association function
This approach is more complex and more flexible: the trigger queries the preconfigured field information matching table, calculates the associated target field information through the association function, and updates the corresponding field information in the entity table or the extension information table.
One specific implementation example is provided below:
-- Create a function that calculates target field information from the configuration in the matching table
CREATE FUNCTION calculate_target_value(
    p_source_table VARCHAR,
    p_source_column VARCHAR,
    p_source_value VARCHAR
) RETURNS VARCHAR AS $$
DECLARE
    v_target_value VARCHAR;
    v_transformation_function VARCHAR;
BEGIN
    SELECT transformation_function INTO v_transformation_function
    FROM field_mapping_table
    WHERE source_table = p_source_table AND source_column = p_source_column;
    IF v_transformation_function = 'UPPER' THEN
        v_target_value := UPPER(p_source_value);
    ELSE
        v_target_value := p_source_value;  -- default: no transformation
    END IF;
    RETURN v_target_value;
END;
$$ LANGUAGE plpgsql;
This implementation requires that a field information matching table and a trigger be preconfigured; the specific configuration process is as follows:
Creating the field information matching table: store the association relationships and calculation formulas between fields.
The field_mapping_table is used to store the mapping relationships between fields and the rules for how to perform the conversion. Each record in the table defines a relationship between a source field and a target field and specifies a transformation function.
Writing the association function: calculate the target field information according to the configuration in the field matching table.
Configuring the trigger: call the association function through the trigger to apply the change. A sketch of this configuration is given below.
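The following is a minimal sketch of this configuration. The field_mapping_table structure is an assumption inferred from the function above, the trigger is written in the MySQL-style syntax used elsewhere in this description (assuming calculate_target_value is available as a stored function in that dialect), and the entity_table/entity_ext_table field names are illustrative:
-- Assumed structure of the field information matching table used by calculate_target_value
CREATE TABLE field_mapping_table (
    source_table VARCHAR(50),
    source_column VARCHAR(50),
    target_table VARCHAR(50),
    target_column VARCHAR(50),
    transformation_function VARCHAR(50)  -- e.g. 'UPPER'; NULL means no transformation
);

-- Example mapping: entity_table.name is copied to entity_ext_table.name in upper case
INSERT INTO field_mapping_table VALUES
('entity_table', 'name', 'entity_ext_table', 'name', 'UPPER');

-- Trigger that calls the association function and writes the calculated value
CREATE TRIGGER trg_apply_field_mapping
AFTER UPDATE ON entity_table
FOR EACH ROW
BEGIN
    UPDATE entity_ext_table
    SET name = calculate_target_value('entity_table', 'name', NEW.name)
    WHERE entity_id = NEW.id;
END;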
The application also provides another embodiment of the multi-level data linkage method in a complex business scenario. In the method provided by the application, field information is uniformly entered through the entry data table; that is to say, a unified data entry mode is defined. On this basis, when data is entered, the input can be checked against the regular check rule to determine whether it conforms to the preset format specification. This embodiment includes:
S201, reading an entry data table, wherein a regular check rule is configured in the entry data table, the entry data table is used for recording field information, and an entity table and/or an expansion information table mapped by the field information are stored in the entry data table;
S202, when field information in the entry data table is changed, performing regular check on input content; the regular check rule is used for verifying whether the format of the input content accords with a predefined specification;
If the input content satisfies the regular check rule, step S203 is executed, and if not, step S206 is executed.
In this embodiment, the regular expression rules first need to be defined, and the regular expression rules of each field are configured in the entry data table.
When the field information is changed, acquiring the content input by the user and performing regular expression verification.
If the input content meets the regular expression rule, continuing to execute the subsequent steps; if not, executing the error processing step.
The following examples illustrate:
Assume that there is a sys_input_columns table containing regular expression rules for fields:
CREATE TABLE sys_input_columns (
id INT PRIMARY KEY AUTO_INCREMENT,
column_name VARCHAR(50),
display_name VARCHAR(50),
input_type VARCHAR(20),
display_width INT,
sort_order INT,
enum_code VARCHAR(50),
regex_pattern VARCHAR(255) -- newly added column for storing regular expression rules
);
-- Insert example data
INSERT INTO sys_input_columns (column_name, display_name, input_type, display_width, sort_order, enum_code, regex_pattern) VALUES
('id', 'ID', 'number', 10, 1, NULL, '^\d+$'),
('value1', 'Value 1', 'number', 20, 2, NULL, '^\d+(\.\d{1,2})?$'),
('value2', 'Value 2', 'number', 20, 3, NULL, '^\d+(\.\d{1,2})?$'),
('computed_value', 'Computed Value', 'number', 20, 4, NULL, '^\d+(\.\d{1,2})?$');
Back-end code is configured to perform regular expression verification when the field information is changed. For example, a Flask application may be created that verifies whether the input content complies with the regular expression rules via a /validate_input route.
And the front end or the back end executes corresponding operation according to the verification result. If the input content satisfies the regular expression rule, continuing to execute the subsequent step (S203); if not, an error processing step is performed (S206).
S203, generating SQL statements according to the changes, storing the SQL statements into a preconfigured dynamic SQL data table, and marking the execution state of each SQL statement in the dynamic SQL data table;
s204, calling a back-end API interface to execute the SQL statement which is stored in the dynamic SQL data table and marked as an unexecuted state, so that the field information in the mapped target extension information table synchronizes the change;
S205, configuring a trigger in the target expansion information table, wherein the trigger is preconfigured with a trigger rule and an execution rule, and when field information in the target expansion information table changes to meet the trigger rule, changing the mapped entity table or corresponding field information in the expansion information table or other expansion information tables through the trigger based on the execution rule.
S206, rejecting the input.
In this step, when the input does not meet the regular check rule, an error-handling step can be executed. The error-handling step processes the erroneous content entered by the user, ensuring that the user corrects the error in time so that the input requirement is met. The following are some possible error-handling steps:
displaying an error prompt: and informing the user that the input content is not in accordance with the requirement, and displaying specific error information.
Preventing commit operations: preventing undesirable data from being committed or saved to the database.
Focus error field: and moving the cursor to the input field with errors, so that the user can modify the input field conveniently.
Highlighting error field: by a change in color or pattern, the erroneous input field is highlighted, alerting the user to the attention.
Recording an error log: error information is recorded in a log file or database for subsequent analysis and troubleshooting.
Providing the correct example: examples or prompts that are satisfactory are provided to assist the user in understanding the correct input format.
In another alternative embodiment, when a new service data table needs to be inserted, the method further includes: creating a service data table and configuring information of each field in the service data table; configuring the association relation between each field information in the newly added service data table and the association relation between each field information in the service data table and the field information of the existing entity table and/or the expansion information table; configuring a trigger in the service data table, and detecting the service data table through the trigger; when the field information in the business data table is changed, the associated entity table or the expansion information table is changed based on the association relation through the trigger.
This embodiment describes in further detail how to configure the table fields, set the association relationships, and use triggers for a newly inserted service data table. The following is a detailed explanation of each step:
1. Creating a service data table and configuring field information
First, a new service data table is created and the information of each field in the table is configured.
2. Association relation between configuration field information
And configuring information of each field in the new table, wherein the information comprises association relations among the fields and association relations between the fields and the existing entity table and the extension information table.
3. Configuring a trigger and detecting a field information change by the trigger
And configuring a trigger in the new service data table, and detecting and updating other tables based on the association relation through the trigger when the field information is changed.
Example scenario:
Assume that a new service table new_service_table is added, which includes the following fields: ticket amount (ticket_amount), fuel surcharge (fuel_surcharge), airport fee (airport_fee), and total amount (total_amount). The existing tax calculation table tax_calculation_table contains the fields: tax amount due (tax_amount_due), prepaid tax amount (prepaid_tax_amount), and a data trace field.
The implementation process comprises the following steps:
new table creation and configuration:
When the new table is created, the field ticket_amount is annotated as "tax calculation".
The fields fuel_surcharge and airport_fee are annotated as "tax paid on behalf".
Configuration association information:
the field information of the new table is configured in the sys_input_columns table.
And configuring the association relation between the new table and the tax calculation table in the sys_rows_column_mapping table.
Trigger creation:
and when creating a trigger and inserting or updating new table data, automatically updating relevant fields of the tax calculation table.
Automatically generating SQL sentences and executing:
when the new table inserts or updates data, the trigger automatically generates an SQL statement, stores it in the fact_dynamic_sql table, and labels it "PENDING".
The back-end API or a scheduled task is called to execute the unexecuted SQL statements and update their state to "EXECUTED".
Through the above steps, automatic association and data updating between the new table and the existing tables are realized, and the automation and efficiency of data processing are improved.
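A minimal sketch of this scenario is given below. The column types, the linking column service_id, and the calculation formulas (including the 9% rate) are purely illustrative assumptions not specified by the description above:
-- New service table (column types assumed)
CREATE TABLE new_service_table (
    id INT PRIMARY KEY,
    ticket_amount DECIMAL(10, 2),
    fuel_surcharge DECIMAL(10, 2),
    airport_fee DECIMAL(10, 2),
    total_amount DECIMAL(10, 2)
);

-- Associate the new table with the existing tax calculation table
INSERT INTO sys_rows_column_mapping VALUES
('new_service_table', 'ticket_amount', 'tax_calculation_table', 'tax_amount_due', 'ticket_amount * 0.09'),
('new_service_table', 'fuel_surcharge', 'tax_calculation_table', 'prepaid_tax_amount', 'fuel_surcharge + airport_fee');

-- Trigger: when the new table receives data, update the associated tax calculation table
-- and record the generated SQL in the dynamic SQL table (FALSE plays the role of "PENDING")
CREATE TRIGGER trg_sync_tax_calculation
AFTER INSERT ON new_service_table
FOR EACH ROW
BEGIN
    UPDATE tax_calculation_table
    SET tax_amount_due = NEW.ticket_amount * 0.09,
        prepaid_tax_amount = NEW.fuel_surcharge + NEW.airport_fee
    WHERE service_id = NEW.id;

    INSERT INTO fact_dynamic_sql (sql_statement, executed)
    VALUES (CONCAT('UPDATE tax_calculation_table SET tax_amount_due = ', NEW.ticket_amount * 0.09,
                   ' WHERE service_id = ', NEW.id), FALSE);
END;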
The foregoing describes in detail the method provided by the present application, and the following describes the apparatus provided by the present application:
referring to fig. 3, the present application provides a multi-level data linkage device in a complex service scenario, the device includes:
The reading unit 301 is configured to read an entry data table, where a regular check rule is configured in the entry data table, the entry data table is used to record field information, and an entity table and/or an extension information table mapped by the field information is stored in the entry data table;
The SQL statement generating unit 302 is configured to generate an SQL statement according to a change when field information in the entry data table is changed, store the SQL statement into a preconfigured dynamic SQL data table, and mark an execution state of each SQL statement in the dynamic SQL data table;
An SQL statement executing unit 303, configured to call a back-end API interface to execute an SQL statement that is stored in the dynamic SQL data table and marked as an unexecuted state, so that field information in the mapped target extension information table synchronizes the change;
And a trigger unit 304, configured with a trigger in the target extension information table, in which a trigger rule and an execution rule are preconfigured, where the trigger unit is configured to change, based on the execution rule, the mapped entity table or the corresponding field information in the extension information table or other extension information tables when the field information in the target extension information table changes to meet the trigger rule.
Optionally, the trigger unit 304 is specifically configured to change the same field information in the entity table or other extended information table through the trigger.
Optionally, the trigger unit 304 is specifically configured to query a pre-configured field information matching table through the trigger, where a correlation function between fields is recorded in the field information matching table;
Extracting the changed field information through the trigger, and transmitting the changed field information to the association function as a parameter;
and calculating output of the associated target field information through the association function, and changing corresponding field information in the entity table or the expansion information table or other expansion information tables through the trigger.
Optionally, the method further comprises a regular check unit 305, configured to perform a regular check on the input content when the field information in the entry data table is changed; the regular check rule is used for verifying whether the format of the input content accords with a predefined specification;
If the input content meets the regular check rule, continuing to execute the subsequent processing flow;
and if the input content does not meet the regular check rule, rejecting the input.
Optionally, the entry data table defines the following fields:
id: a primary key uniquely identifying each data entry;
column_name: a field name of the data entry;
column_type: the data type of the field;
table_name: the table name to which the field belongs;
dictionary_key: dictionary keys for associating data in the dictionary tables;
display_width: displaying the width of the interface;
validization_regex: and checking the regular expression.
Optionally, the field information matching table defines the following fields:
source_table: source table name;
source_column: source table field name;
target_table: a target table name;
target_column: a target table field name;
calculation_formula: the association function.
The application also provides a multi-level data linkage device under the complex service scene, which comprises:
a processor 401, a memory 402, an input/output unit 403, and a bus 404;
the processor 401 is connected to the memory 402, the input/output unit 403, and the bus 404;
The memory 402 stores a program, and the processor 401 invokes the program to execute the multi-level data linkage method in any of the above complex business scenarios.
The application also relates to a computer readable storage medium, wherein the computer readable storage medium stores a program, and when the program runs on a computer, the program causes the computer to execute the multi-level data linkage method under any complex service scene.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random-access memory (RAM, random access memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
Claims (10)
1. The multistage data linkage method under the complex service scene is characterized by comprising the following steps:
Reading an entry data table, wherein a regular check rule is configured in the entry data table, the entry data table is used for recording field information, and an entity table and/or an expansion information table mapped by the field information are stored in the entry data table;
When field information in the entry data table is changed, generating an SQL statement according to the change, storing the SQL statement into a preconfigured dynamic SQL data table, and marking the execution state of each SQL statement in the dynamic SQL data table;
Calling a back-end API interface to execute SQL statements which are stored in the dynamic SQL data table and marked as an unexecuted state, so that field information in the mapped target extension information table synchronizes the changes;
And a trigger is configured in the target expansion information table, a trigger rule and an execution rule are preconfigured in the trigger, and when the field information in the target expansion information table changes to meet the trigger rule, the corresponding field information in the mapped entity table or the expansion information table or other expansion information tables is changed through the trigger based on the execution rule.
2. The method according to claim 1, wherein changing, based on the execution rule, the mapped entity table or the corresponding field information in the extended information table or other extended information tables by the trigger includes:
And changing the same field information in the entity table or other expansion information tables through the trigger.
3. The method according to claim 1, wherein changing, based on the execution rule, the mapped entity table or the corresponding field information in the extended information table or other extended information tables by the trigger includes:
inquiring a pre-configured field information matching table through the trigger, wherein the field information matching table records association functions among fields;
Extracting the changed field information through the trigger, and transmitting the changed field information to the association function as a parameter;
and calculating output of the associated target field information through the association function, and changing corresponding field information in the entity table or the expansion information table or other expansion information tables through the trigger.
4. The multi-level data linkage method under the complex business scene according to claim 1, wherein when field information in the entry data table is changed, regular check is performed on the input content; the regular check rule is used for verifying whether the format of the input content accords with a predefined specification;
If the input content meets the regular check rule, continuing to execute the subsequent processing flow;
and if the input content does not meet the regular check rule, rejecting the input.
5. A multi-level data linkage method in a complex business scenario according to claim 3, wherein the entry data table defines the following fields:
id: a primary key uniquely identifying each data entry;
column_name: a field name of the data entry;
column_type: the data type of the field;
table_name: the table name to which the field belongs;
dictionary_key: dictionary keys for associating data in the dictionary tables;
display_width: displaying the width of the interface;
validization_regex: and checking the regular expression.
6. The method for linking multi-level data in complex business scenario according to claim 3, wherein said field information matching table defines the following fields:
source_table: source table name;
source_column: source table field name;
target_table: a target table name;
target_column: a target table field name;
calculation_formula: the association function.
7. The method for multi-level data linkage in a complex business scenario according to claim 1, wherein when a new business data table needs to be inserted, the method further comprises:
creating a service data table and configuring information of each field in the service data table;
Configuring the association relation between each field information in the newly added service data table and the association relation between each field information in the service data table and the field information of the existing entity table and/or the expansion information table;
configuring a trigger in the service data table, and detecting the service data table through the trigger;
When the field information in the service data table is changed, the associated entity table or the expansion information table is changed based on the association relation through the trigger.
8. A multi-level data linkage device in a complex business scenario, the device comprising:
The reading unit is used for reading an entry data table, wherein a regular check rule is configured in the entry data table, the entry data table is used for recording field information, and an entity table and/or an expansion information table mapped by the field information are stored in the entry data table;
The SQL statement generating unit is used for generating an SQL statement according to the change when the field information in the entry data table is changed, storing the SQL statement into a preconfigured dynamic SQL data table, and marking the execution state of each SQL statement in the dynamic SQL data table;
The SQL statement executing unit is used for calling a back-end API interface to execute the SQL statement which is stored in the dynamic SQL data table and marked as an unexecuted state, so that the field information in the mapped target extension information table synchronizes the change;
the trigger unit is used for changing the mapped entity table or the corresponding field information in the expansion information table or other expansion information tables based on the execution rule when the field information in the target expansion information table is changed to meet the trigger rule.
9. A multi-level data linkage device in a complex business scenario, the device comprising:
A processor, a memory, an input-output unit, and a bus;
the processor is connected with the memory, the input/output unit and the bus;
the memory holds a program which the processor invokes to perform the method of any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored thereon a program which, when executed on a computer, performs the method according to any of claims 1 to 7.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410903515.6A (CN118445293B) | 2024-07-08 | 2024-07-08 | Multistage data linkage method, device and storage medium under complex service scene |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410903515.6A (CN118445293B) | 2024-07-08 | 2024-07-08 | Multistage data linkage method, device and storage medium under complex service scene |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN118445293A | 2024-08-06 |
| CN118445293B | 2024-08-30 |
Family
ID=92318001
Family Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410903515.6A (CN118445293B, Active) | 2024-07-08 | 2024-07-08 | Multistage data linkage method, device and storage medium under complex service scene |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN118445293B (en) |
Citations (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117033417A (en) * | 2023-08-21 | 2023-11-10 | 支付宝(杭州)信息技术有限公司 | Service data processing method, medium and computer equipment |
| CN117113392A (en) * | 2023-08-10 | 2023-11-24 | 支付宝(杭州)信息技术有限公司 | Private data processing method, device, computer equipment and storage medium |
Family Cites Families (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060200774A1 (en) * | 2000-12-15 | 2006-09-07 | Flam Ran J | Configuring activities to perform operations on user-defined fields |
Also Published As

| Publication number | Publication date |
|---|---|
| CN118445293A (en) | 2024-08-06 |
Legal Events

| Date | Code | Title |
|---|---|---|
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |