
CN114138838B - Data processing method, device, equipment and medium - Google Patents

Data processing method, device, equipment and medium

Info

Publication number
CN114138838B
CN114138838B
Authority
CN
China
Prior art keywords
target data
data
queue
database
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111485148.5A
Other languages
Chinese (zh)
Other versions
CN114138838A (en)
Inventor
单璐瑶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shengdoushi Shanghai Science and Technology Development Co Ltd
Original Assignee
Shengdoushi Shanghai Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shengdoushi Shanghai Technology Development Co Ltd filed Critical Shengdoushi Shanghai Technology Development Co Ltd
Priority to CN202111485148.5A priority Critical patent/CN114138838B/en
Publication of CN114138838A publication Critical patent/CN114138838A/en
Application granted granted Critical
Publication of CN114138838B publication Critical patent/CN114138838B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/248Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/546Message passing systems or structures, e.g. queues

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract


The present disclosure provides a data processing method, device, equipment and medium, which relates to the field of computer technology, and in particular to the field of data processing. The implementation scheme is: in response to receiving a write request for target data, storing the target data in a cache, and storing the target data in a message queue; reading the target data from the message queue; writing the target data read from the message queue into a database; and in response to receiving a query request, reading the target data from the cache and/or database.

Description

Data processing method, device, equipment and medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to the field of data processing, and more particularly, to a method, an apparatus, an electronic device, a computer readable storage medium, and a computer program product for data processing.
Background
A data processing system comprises a cache and a database. The system may be distributed across different machine rooms, with each machine room having its own cache while all machine rooms are connected to the same database. When the data traffic is heavy, the performance of the shared database is affected.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, the problems mentioned in this section should not be considered as having been recognized in any prior art unless otherwise indicated.
Disclosure of Invention
The present disclosure provides a method, apparatus, electronic device, computer readable storage medium and computer program product for data processing.
According to one aspect of the disclosure, a data processing method is provided, comprising: storing target data in a cache and storing the target data in a message queue in response to receiving a write request for the target data; reading the target data from the message queue; writing the target data read from the message queue into a database; and reading the target data from the cache and/or the database in response to receiving a query request.
According to another aspect of the present disclosure, there is provided a data processing apparatus including: a cache configured to store data; a database configured to store data; a message queue configured to store data; a storage unit configured to store target data to the cache in response to receiving a write request for the target data; a producer configured to store the target data to the message queue in response to receiving the write request for the target data; a consumer configured to read the target data from the message queue; a writing unit configured to write the target data read from the message queue to the database; and a query unit configured to read the target data from the cache and/or the database in response to receiving a query request.
According to another aspect of the present disclosure, there is provided an electronic device comprising at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the data processing method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the above-described data processing method.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-mentioned data processing method.
According to one or more embodiments of the present disclosure, stability of a data processing system may be improved.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The accompanying drawings illustrate exemplary embodiments and, together with the description, serve to explain exemplary implementations of the embodiments. The illustrated embodiments are for exemplary purposes only and do not limit the scope of the claims. Throughout the drawings, identical reference numerals designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a flow chart of a data processing method according to an exemplary embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a data processing method according to an exemplary embodiment of the present disclosure;
FIG. 3 illustrates a block diagram of a data processing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a data processing process according to an exemplary embodiment of the present disclosure;
FIG. 5 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, the elements may be one or more if the number of the elements is not specifically limited. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
In the related art, data processing relies on keeping the database highly available, and data is written synchronously to the database throughout the entire data processing flow. The disadvantages of this implementation are that, when data traffic is heavy, frequent read and write operations degrade database performance; when the database is unavailable, the entire data processing flow is blocked; and the maintenance cost of the database is increased.
In order to solve the above-mentioned problems, the present disclosure provides a data processing method that writes target data synchronously into a cache and introduces a message queue to decouple the database write, so that the target data is written into the database asynchronously. In this way, the pressure that peak data traffic places on the database is reduced and data processing performance is improved; at the same time, the dependence of the data processing system on each individual component is reduced, data processing can proceed normally when any one of the cache, the message queue, and the database is unavailable, and the stability of the system is improved.
The data processing method of the present disclosure can be applied, for example but without limitation, to an order processing system. By adopting the method of the embodiments, data for the whole order state transition flow is written synchronously into the cache and asynchronously into the database through the message queue. This relieves pressure on the database and reduces the dependence of the order flow on each individual component: even if any one of the cache, the message queue, and the database is unavailable during peak order traffic, orders can still be placed successfully, which improves the stability of the order processing system.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a data processing method according to an exemplary embodiment of the present disclosure. As shown in fig. 1, the data processing method includes: step S101, in response to receiving a write request for target data, storing the target data in a cache and storing the target data in a message queue; step S102, reading the target data from the message queue; step S103, writing the target data read from the message queue into a database; and step S104, in response to receiving a query request, reading the target data from the cache and/or the database.
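For illustration only, the following minimal Python sketch (using the redis and pika client libraries, with invented key and queue names and a stubbed database write) shows one way the write path of steps S101-S103 could be wired together; it is an assumption-laden sketch, not the implementation of the disclosure.

```python
import json
import time

import pika   # RabbitMQ client, standing in for the message queue
import redis  # Redis client, standing in for the cache

# Invented names; the disclosure does not prescribe them.
CACHE_KEY = "order:{id}"
TARGET_QUEUE = "target_queue"

cache = redis.Redis(host="localhost", port=6379)
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue=TARGET_QUEUE, durable=True)


def write_to_database(target_data: dict) -> None:
    """Stand-in for the real database write (e.g. a MySQL INSERT/UPDATE)."""
    ...


def handle_write_request(target_data: dict) -> None:
    """Step S101: store the target data in the cache and in the message queue."""
    payload = json.dumps({**target_data, "ts": time.time()})
    cache.set(CACHE_KEY.format(id=target_data["id"]), payload)  # synchronous cache write
    channel.basic_publish(exchange="", routing_key=TARGET_QUEUE, body=payload)  # queued for the asynchronous database write


def on_message(ch, method, properties, body):
    """Steps S102 and S103: the consumer reads the data and writes it to the database."""
    write_to_database(json.loads(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)


channel.basic_consume(queue=TARGET_QUEUE, on_message_callback=on_message)
# channel.start_consuming()  # blocking; in practice run in a worker process
```

In this sketch only the cache write and the publish to the message queue happen in the request path; the database write happens later in the consumer, which is what decouples the database from peak traffic.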
In this way, the target data is written synchronously into the cache while the message queue decouples the database write, so that the target data is written into the database asynchronously. This reduces the dependence of the data processing system on each individual component: when data traffic is heavy, data processing proceeds normally as long as the data is written successfully to at least one of the cache, the message queue, and the database, which relieves the pressure on the database and improves data processing performance.
In one example, the target data may be read from both the cache and the database, the two copies compared, and the copy in the latest state used as the query result; when the target data cannot be read from the database, the copy in the cache is used as the query result. This realizes the query of the target data while reducing the pressure of heavy data traffic on the database and improving data processing performance.
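Continuing the same illustrative sketch (and assuming, purely for the example, that each copy of the data carries a `ts` field that orders its states), the query path of step S104 might look like the following:

```python
from typing import Optional


def read_from_database(data_id: str) -> Optional[dict]:
    """Stand-in for the real database read (e.g. a MySQL SELECT)."""
    ...


def handle_query_request(data_id: str) -> Optional[dict]:
    """Step S104: read from the cache and/or the database and return the latest copy."""
    cached = None
    raw = cache.get(CACHE_KEY.format(id=data_id))
    if raw is not None:
        cached = json.loads(raw)

    try:
        stored = read_from_database(data_id)
    except Exception:
        stored = None  # database unavailable: fall back to the cache

    if cached is None:
        return stored
    if stored is None:
        return cached
    # Both copies were read: compare their states and return the most recent one.
    return cached if cached["ts"] >= stored["ts"] else stored
```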
The cache may be, for example, a distributed cache based on a Redis cluster, but is not limited thereto; it may also be MongoDB, CouchDB, or another non-relational store. The message queue may be a RabbitMQ message queue, and the database may be a MySQL relational database, but these are likewise not limiting. Any implementation that provides the functions described above may be used, and the specific forms of the cache, the message queue, and the database are not limited.
According to some embodiments, the message queue includes a delay queue and a target queue having a corresponding target consumer; the target data is stored to the target queue, and the target consumer is configured to automatically read the target data from the target queue. In this case, storing the target data in the message queue may include: in response to a failure to write the target data read from the target queue into the database, sending the target data to the delay queue; and in response to a first preset condition being satisfied, resending the target data in the delay queue to the target queue.
Therefore, when the target data automatically read from the target queue fails to be written into the database, the target data is first written into the delay queue to wait for the database write to be retried, so that loss of the target data is avoided, data integrity is protected, and the requirements of practical application scenarios can be fully met.
In some embodiments, the target data in the delay queue may be resent to the target queue at fixed intervals, where it waits to be automatically consumed by the target consumer so that the database write can be retried.
Accordingly, in some embodiments, storing the target data in the message queue includes recording, each time the target data fails to be written into the database, a corresponding start time at which the target data is sent to the delay queue, wherein the first preset condition includes the interval between the current time and the start time reaching a preset duration.
The first preset condition is equivalent to the waiting time of the target data in the delay queue reaching a preset threshold, and it can be adjusted flexibly according to the requirements of the application scenario to improve data processing performance. For example, a threshold on the number of retried database writes may also be set, which prevents the target data from being retried against the database in an endless loop and further increasing the pressure on the database.
Based on this, according to some embodiments, the message queue includes a dead letter queue whose data is not automatically consumed, and storing the target data in the message queue includes sending the target data to the dead letter queue in response to the cumulative number of times the target data has been sent from the target queue to the delay queue reaching a threshold. Therefore, when the target data repeatedly fails to be written into the database, it can be stored temporarily in the dead letter queue; loss of the target data is avoided while the pressure placed on the database during peak traffic is relieved, which improves data processing performance.
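One common way to realize such a delay queue and dead letter queue on RabbitMQ — offered here only as a sketch under assumed queue names and timing values, reusing the `channel` and `TARGET_QUEUE` objects from the sketch above, and not as the configuration required by the disclosure — is to give the delay queue a per-queue TTL and dead-letter expired messages back to the target queue:

```python
DELAY_QUEUE = "delay_queue"
DEAD_LETTER_QUEUE = "dead_letter_queue"
RETRY_DELAY_MS = 30_000  # assumed "preset duration" of the first preset condition
MAX_RETRIES = 5          # assumed threshold on delay-queue round trips

# Delay queue: it has no consumer; messages expire after RETRY_DELAY_MS and are
# dead-lettered back to the target queue, realizing "resend after a preset duration".
channel.queue_declare(
    queue=DELAY_QUEUE,
    durable=True,
    arguments={
        "x-message-ttl": RETRY_DELAY_MS,
        "x-dead-letter-exchange": "",          # default exchange routes by queue name
        "x-dead-letter-routing-key": TARGET_QUEUE,
    },
)

# Dead letter queue: holds data that has exceeded MAX_RETRIES; it is not consumed
# automatically, but only when a consumer is triggered explicitly.
channel.queue_declare(queue=DEAD_LETTER_QUEUE, durable=True)
```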
Further, according to some embodiments, reading the target data from the message queue in step S102 includes triggering a dead letter consumer to read the target data from the dead letter queue in response to a second preset condition being met, and writing the target data read from the message queue into the database in step S103 includes writing the target data read from the dead letter queue into the database. Therefore, data loss can be effectively avoided and data integrity is fully protected.
According to some embodiments, the second preset condition includes the database being in a writable state. The writable state corresponds, for example, to the database having recovered from an unavailable fault state in an extreme scenario, so that data loss can be effectively avoided and data integrity fully protected.
It can be appreciated that the second preset condition is not limited to this; for example, it may include the data traffic being below a certain threshold, so that processing work can be balanced between peak and off-peak periods, the read/write capacity of the database hardware is fully utilized, and data processing performance is improved.
For example, in response to a second preset condition being met, a dead letter consumer may be manually triggered to read the target data from the dead letter queue.
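As a sketch only (reusing the assumed queue names and the `write_to_database` stand-in from the sketches above), a manually triggered dead letter consumer could drain the dead letter queue with `basic_get` once the second preset condition holds:

```python
def drain_dead_letter_queue(channel, batch_limit: int = 1000) -> int:
    """Triggered manually (e.g. by an operator or an ops tool) once the second
    preset condition holds, such as the database having become writable again."""
    drained = 0
    while drained < batch_limit:
        method, properties, body = channel.basic_get(queue=DEAD_LETTER_QUEUE, auto_ack=False)
        if method is None:  # dead letter queue is empty
            break
        try:
            write_to_database(json.loads(body))
            channel.basic_ack(delivery_tag=method.delivery_tag)
            drained += 1
        except Exception:
            # Database still not writable: leave the message in the dead letter queue.
            channel.basic_nack(delivery_tag=method.delivery_tag, requeue=True)
            break
    return drained
```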
According to some embodiments, the data processing method further comprises, in response to a failure of the target data to be stored in the message queue, directly writing the target data to the database. Therefore, when the message queue is unavailable, the target data can be synchronously written into the database, the dependence of the data processing system on the message queue is reduced, and the stability of the system is improved.
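A minimal sketch of this fallback, under the same assumptions as the sketches above (a broad exception is caught here only for brevity; in practice a narrower AMQP error type would be used):

```python
def handle_write_request_with_fallback(target_data: dict) -> None:
    payload = json.dumps({**target_data, "ts": time.time()})
    cache.set(CACHE_KEY.format(id=target_data["id"]), payload)
    try:
        channel.basic_publish(exchange="", routing_key=TARGET_QUEUE, body=payload)
    except Exception:
        # Message queue unavailable: write the target data to the database synchronously.
        write_to_database(json.loads(payload))
```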
According to some embodiments, reading the target data from the cache and/or the database in response to receiving a query request includes reading the target data from the database in response to a failure to store the target data in the cache. Therefore, when the cache is unavailable, the query falls through directly to the database, which reduces the dependence of the data processing system on the cache and improves the stability of the system.
According to some embodiments, reading the target data from the cache and/or the database in response to receiving a query request includes reading the target data from the cache in response to the target data not being read from the database. Therefore, the query can still be served from the cache when the database is unavailable, the dependence of the data processing system on the database is reduced, and the stability of the system is improved. At the same time, repeated attempts to query the database are avoided, reducing the pressure on the database.
According to some embodiments, the cache includes a plurality of sub-caches, and storing the target data to the cache in step S101 includes storing the target data to each sub-cache. Therefore, when any one sub-cache is unavailable, the data can be queried from the other sub-caches; in particular, when the database is unavailable, the impact on query performance is avoided, the dependence on any individual sub-cache is further reduced, and the performance and stability of the system are improved.
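A sketch of such a multi-sub-cache write and read, with invented host names standing in for the per-machine-room caches:

```python
# Hypothetical sub-caches, e.g. one Redis instance per machine room.
sub_caches = [
    redis.Redis(host="cache-room-a", port=6379),
    redis.Redis(host="cache-room-b", port=6379),
]


def store_to_sub_caches(key: str, payload: str) -> None:
    """Write the target data to every sub-cache; one failing sub-cache is tolerated."""
    for sub_cache in sub_caches:
        try:
            sub_cache.set(key, payload)
        except redis.RedisError:
            continue  # this sub-cache is unavailable; the others still hold the data


def read_from_sub_caches(key: str):
    """Read the target data from the first sub-cache that answers."""
    for sub_cache in sub_caches:
        try:
            value = sub_cache.get(key)
            if value is not None:
                return value
        except redis.RedisError:
            continue
    return None
```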
According to some embodiments, the target data includes a data state, and reading the target data from the cache and/or the database in response to receiving a query request in step S104 includes: in response to the target data corresponding to the query request being read from both the cache and the database, determining, based on the data state of the target data, the copy with the latest data state among the target data read from the cache and the database as the query result. Therefore, the target data with the latest data state is returned as the query result, which ensures the accuracy of the query result.
According to some embodiments, when the target data is order data, the data processing system may be an order processing system, and the processing of the order data is implemented accordingly. The pressure on the order database during peak consumption periods is relieved, so that the requirements of practical application scenarios can be fully met. For example, the data state of the order data may include an order state, such as a to-be-paid state, a paid state, a cancelled state, and so forth.
According to some embodiments, where the target data is order data, the target data includes a target order state, and the method further comprises, prior to writing the target data to the database, obtaining the latest order state associated with the target data in the cache and/or the database, wherein writing the target data to the database is performed in response to the latest order state being a pre-state of the target order state. Therefore, by checking a preset order state transition condition, the correctness of the target data written into the database is ensured and data with an incorrect order state is prevented from being written into the database, which guarantees the correctness and integrity of the data and fully meets the requirements of practical application scenarios.
According to some embodiments, in the case where the target data is order data, the target data includes a target order state, and the message queue includes a delay queue, the data processing method further includes sending the target data to the delay queue in response to the latest order state not being a pre-state of the target order state. Therefore, data with an incorrect order state is stored temporarily in the delay queue to wait for a retried database write, which ensures the correctness and integrity of the data and fully meets the requirements of practical application scenarios.
In one example, the target order state represents the payment state of the order and may be a to-be-paid state, a paid state, a cancelled state, or the like. Writing the target data into the database in response to the latest order state being a pre-state of the target order state may be performed according to a preset order state transition condition. For example, when the target order state is the paid state, the write is performed only when the latest order state associated with the target data in the cache and/or the database is the to-be-paid state; when the latest order state is the cancelled state, the write is not performed. Checking the preset order state transition condition in this way ensures the correctness of the target data written into the database and fully meets the requirements of practical application scenarios.
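For illustration, the pre-state check could be expressed as a small state table; the state names and the `get_latest_order_state` helper below are invented for the example (not taken from the disclosure), and the sketch reuses the queue names and `write_to_database` stand-in assumed earlier:

```python
# Assumed order state machine: each target state maps to its required pre-state.
PRE_STATE = {
    "PAID": "TO_BE_PAID",
    "CANCELLED": "TO_BE_PAID",
    "REFUNDED": "PAID",
}


def get_latest_order_state(order_id: str):
    """Stand-in: look up the latest order state in the cache and/or the database."""
    ...


def write_order_if_transition_valid(order: dict) -> bool:
    """Write the order only if the latest known state is the pre-state of the
    target order state; otherwise send it to the delay queue to be retried."""
    latest = get_latest_order_state(order["id"])
    target_state = order["state"]
    if latest == PRE_STATE.get(target_state):
        write_to_database(order)
        return True
    # Out-of-order message (e.g. "PAID" arrived before "TO_BE_PAID" was persisted):
    # park it in the delay queue and retry later.
    channel.basic_publish(exchange="", routing_key=DELAY_QUEUE, body=json.dumps(order))
    return False
```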
According to some embodiments, in the case where the target data is order data, the data state includes an order state, and reading the target data from the cache and/or the database in response to receiving a query request in step S104 includes: in response to the order data corresponding to the query request being read from both the cache and the database, determining, based on the order state of the order data, the order data with the latest order state among the order data read from the cache and the database as the query result. Therefore, the order data with the latest order state is returned as the query result, which ensures the accuracy of the order query result.
Fig. 2 shows a flowchart of a data processing method according to an exemplary embodiment of the present disclosure.
As shown in fig. 2, the data processing method includes:
Step S201, in response to receiving a write request for target data, storing the target data in a cache, and storing the target data in a target queue of a message queue, wherein the message queue further comprises a delay queue and a dead letter queue;
Step S202, the target consumer automatically reads the target data from the target queue;
writing the target data read from the target queue into the database, and determining whether the target data has been stored successfully in the database;
ending the flow in response to the target data being stored successfully in the database;
in response to the target data not being stored successfully in the database, determining whether the number of times the target data has been sent from the target queue to the delay queue reaches a threshold;
in response to the number of times the target data has been sent from the target queue to the delay queue not reaching the threshold, executing step S203 to send the target data to the delay queue;
in response to a first preset condition being met (for example, the time elapsed since the target data was sent to the delay queue reaching a preset value), executing step S204 to resend the target data in the delay queue to the target queue, where it waits to be read again automatically by the target consumer;
in response to the number of times the target data has been sent from the target queue to the delay queue reaching the threshold, executing step S205 to send the target data to the dead letter queue; and
in response to a second preset condition being met (for example, the database becoming writable again), executing step S206 to manually trigger the dead letter consumer to read the target data from the dead letter queue, and then executing step S207 to write the target data read from the dead letter queue into the database. An illustrative code sketch of this retry flow is given below.
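Purely as an illustrative sketch of this retry flow (in Python with pika, reusing the queue names, the `MAX_RETRIES` value, and the `write_to_database` stand-in assumed earlier; carrying the retry counter in a message header is likewise an assumption, not something the disclosure specifies):

```python
def on_target_message(ch, method, properties, body):
    """Target consumer for the flow of fig. 2 (steps S202 onward), as a sketch."""
    retries = (properties.headers or {}).get("x-retry-count", 0)
    try:
        write_to_database(json.loads(body))             # attempt the database write
        ch.basic_ack(delivery_tag=method.delivery_tag)  # success: flow ends here
        return
    except Exception:
        pass  # database write failed; decide between delay queue and dead letter queue

    if retries < MAX_RETRIES:
        # Step S203: park the message in the delay queue; after the delay-queue TTL
        # expires it is routed back to the target queue (step S204).
        ch.basic_publish(
            exchange="",
            routing_key=DELAY_QUEUE,
            body=body,
            properties=pika.BasicProperties(headers={"x-retry-count": retries + 1}),
        )
    else:
        # Step S205: retries exhausted; move the message to the dead letter queue,
        # where it waits until a consumer is triggered manually (steps S206 and S207).
        ch.basic_publish(exchange="", routing_key=DEAD_LETTER_QUEUE, body=body)
    ch.basic_ack(delivery_tag=method.delivery_tag)      # the original delivery is handled


channel.basic_consume(queue=TARGET_QUEUE, on_message_callback=on_target_message)
```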
According to another aspect of the present disclosure, there is also provided a data processing apparatus. Fig. 3 shows a block diagram of a data processing apparatus according to an exemplary embodiment of the present disclosure. As shown in fig. 3, the data processing apparatus 300 includes: a cache 301 configured to store data; a database 302 configured to store data; a message queue 303 configured to store data; a storage unit 304 configured to store target data to the cache 301 in response to receiving a write request for the target data; a producer 305 configured to store the target data to the message queue 303 in response to receiving the write request for the target data; a consumer 306 configured to read the target data from the message queue 303; a writing unit 307 configured to write the target data read from the message queue 303 to the database 302; and a query unit 308 configured to read the target data from the cache 301 and/or the database 302 in response to receiving a query request. In this way, the target data is written synchronously into the cache while the message queue decouples the database write, so that the target data is written into the database asynchronously; the dependence of the data processing system on each individual component is reduced, and when data traffic is heavy, data processing proceeds normally as long as the data is written successfully to at least one of the cache, the message queue, and the database, which relieves the pressure on the database and improves data processing performance.
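Purely as an illustrative mapping of these units onto code (the class and method names are invented here, and the query logic is simplified), the apparatus of fig. 3 could be organized as follows:

```python
class DataProcessingApparatus:
    """Sketch of apparatus 300: each method corresponds to one unit of fig. 3."""

    def __init__(self, cache, channel, write_to_database, read_from_database):
        self.cache = cache                            # cache 301
        self.channel = channel                        # message queue 303
        self.write_to_database = write_to_database    # writing unit 307 -> database 302
        self.read_from_database = read_from_database  # used by query unit 308

    def store(self, key, payload):          # storage unit 304
        self.cache.set(key, payload)

    def produce(self, queue, payload):      # producer 305
        self.channel.basic_publish(exchange="", routing_key=queue, body=payload)

    def consume(self, queue, callback):     # consumer 306
        self.channel.basic_consume(queue=queue, on_message_callback=callback)

    def query(self, key, data_id):          # query unit 308
        cached = self.cache.get(key)
        stored = self.read_from_database(data_id)
        # Simplified: the real logic compares the data states as described above.
        return stored if cached is None else cached
```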
According to some embodiments, the consumer 306 comprises a target consumer; the message queue 303 comprises a delay queue and a target queue with the corresponding target consumer; the producer 305 is configured to store the target data to the target queue; the target consumer is configured to automatically read the target data from the target queue; and the writing unit is configured to write the target data read from the target queue to the database. The message queue 303 further comprises a first sending unit configured to send the target data to the delay queue in response to a failure to write the target data into the database, and a second sending unit configured to resend the target data in the delay queue to the target queue in response to a first preset condition being met. The specific content of the first preset condition is as described above and is not repeated here. Therefore, when the target data fails to be written into the database, the write can be retried; loss of the target data is avoided, data integrity is protected, and the requirements of practical application scenarios can be fully met.
According to some embodiments, the message queue 303 comprises a dead letter queue whose data is not automatically consumed, and the message queue 303 further comprises a third sending unit configured to send the target data to the dead letter queue in response to the cumulative number of times the target data has been sent from the target queue to the delay queue reaching a threshold. Therefore, when the target data repeatedly fails to be written into the database, it can be stored temporarily; loss of the target data is avoided while the pressure on the database during peak traffic is relieved, which improves data processing performance.
According to some embodiments, the consumer 306 further comprises a dead letter consumer configured to be triggered to read the target data from the dead letter queue, wherein the writing unit 307 is further configured to write the target data read from the dead letter queue to a database. Therefore, data loss can be effectively avoided, and data integrity is fully protected.
Fig. 4 shows a schematic diagram of a data processing process according to an exemplary embodiment of the present disclosure, in which the arrows indicate the direction of signal flow. Based on the operation requests and/or the flow of the target data between the modules shown in fig. 4, the data processing process may include the following steps: step S11, in response to receiving a write request for target data, synchronously storing the target data in the cache 301, and storing the target data in the target queue of the message queue 303 through the producer 305, where the message queue 303 includes the target queue, a delay queue, and a dead letter queue; step S12, automatically reading the target data from the target queue by the target consumer; step S13, writing the target data into the database 302; in response to the target data not being stored successfully in the database, sending the target data to the delay queue; in response to a first preset condition being met (for example, the interval between the current time and the start time at which the target data was sent to the delay queue reaching a preset duration), resending the target data in the delay queue to the target queue, where it waits to be read again by the target consumer and written into the database 302; in response to the target data repeatedly failing to be written into the database, that is, the cumulative number of times the target data has been sent to the delay queue reaching a threshold, sending the target data to the dead letter queue; in response to a second preset condition being met, manually triggering the dead letter consumer to read the target data from the dead letter queue and write it into the database 302; and in response to the target data not being stored successfully in the message queue, directly writing the target data into the database. Therefore, as long as the data is written successfully into at least one of the cache, the message queue, and the database, normal execution of data processing is ensured, which improves the performance and stability of the data processing system.
The data processing process further includes step S19, in response to receiving the query request, reading the target data from the cache 301 and the database 302, comparing the target data in the cache 301 and the database 302, and taking the target data in the latest data state as the query result.
According to another aspect of the present disclosure, there is also provided an electronic device comprising at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the data processing method described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the above-described data processing method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-mentioned data processing method.
Referring to fig. 5, a block diagram of an electronic device 500 that may be used to implement embodiments of the present disclosure will now be described; it is an example of a hardware device that can be applied to aspects of the present disclosure. Electronic devices may be different types of computer devices, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
Fig. 5 shows a block diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 5, the electronic device 500 may include at least one processor 501, a working memory 502, I/O devices 504, a display device 505, a storage 506, and a communication interface 507 capable of communicating with each other over a system bus 503.
The processor 501 may be a single processing unit or multiple processing units, all of which may include a single or multiple computing units or multiple cores. The processor 501 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. The processor 501 may be configured to obtain and execute computer readable instructions stored in the working memory 502, the storage 506, or other computer readable media, such as program code of the operating system 502a, program code of the application 502b, and the like.
Working memory 502 and storage 506 are examples of computer-readable storage media for storing instructions that are executed by processor 501 to implement the various functions previously described. Working memory 502 may include both volatile memory and nonvolatile memory (e.g., RAM, ROM, etc.). In addition, storage 506 may include hard drives, solid state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), storage arrays, network attached storage, storage area networks, and the like. Working memory 502 and storage 506 may both be referred to herein collectively as memory or computer-readable storage medium, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that may be executed by processor 501 as a particular machine configured to implement the operations and functions described in the examples herein.
I/O devices 504 may include input devices and/or output devices. An input device may be any type of device capable of inputting information to the electronic device 500 and may include, but is not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. An output device may be any type of device capable of presenting information and may include, but is not limited to, a video/audio output terminal, a vibrator, and/or a printer.
The communication interface 507 allows the electronic device 500 to exchange information/data with other devices through computer networks, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth(TM) devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The application 502b in the working memory 502 may be loaded to perform the various methods and processes described above, such as steps S101-S104 in fig. 1. In some embodiments, some or all of the computer program may be loaded and/or installed onto the electronic device 500 via the storage 506 and/or the communication interface 507. One or more steps of the data processing method described above may be performed when the computer program is loaded and executed by the processor 501.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special or general purpose programmable processor, operable to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), and the Internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present invention is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalent elements thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure, and various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.

Claims (20)

1. A data processing method, comprising:
Storing target data in a cache and storing the target data in a message queue in response to receiving a write request of the target data, wherein the target data is order data comprising a data state, and the data state comprises an order state;
Reading the target data from the message queue;
Writing the target data read from the message queue into a database, and
In response to receiving a query request, reading the target data from the cache and/or database,
Wherein the method further comprises:
Acquiring the latest order state related to the target data in the cache and/or the database before the target data is written into the database,
And wherein writing the target data to a database is performed in response to the latest order state being a pre-state of a target order state of the target data.
2. The method of claim 1, wherein the message queue comprises a delay queue and a target queue having respective target consumers, the target data being stored to the target queue, the target consumers being configured to automatically read the target data from the target queue,
And wherein the message queue storing the target data comprises:
Transmitting the target data to the delay queue in response to a failure to write the target data read from the target queue to a database, and
And in response to the first preset condition being met, the target data in the delay queue are sent to the target queue again.
3. The method of claim 2, wherein the message queue comprises a dead letter queue, data in the dead letter queue not being automatically consumed,
And wherein the message queue storing the target data comprises:
And transmitting the target data to the dead message queue in response to the accumulated number of times the target data is transmitted from the target queue to a delay queue reaching a threshold.
4. The method of claim 3, wherein reading the target data from the message queue comprises:
triggering a dead letter consumer to read the target data from the dead letter queue in response to meeting a second preset condition,
Wherein writing the target data read from the message queue to a database comprises:
and writing the target data read from the dead letter queue into a database.
5. The method of claim 4, wherein the second preset condition comprises the database being a writable data state.
6. The method of claim 2, wherein the message queue storing the target data comprises:
In response to each failure of the target data to be written into the database, recording a corresponding start time at which the target data is sent to the delay queue,
The first preset condition comprises that the time interval between the current time and the starting time reaches a preset duration.
7. The method of any of claims 1-6, further comprising:
and in response to failure of storing the target data in the message queue, directly writing the target data into the database.
8. The method of any of claims 1-6, wherein, in response to receiving a query request, reading the target data from the cache and/or database comprises:
and in response to the failure of storing the target data in the cache, reading the target data from the database.
9. The method of any of claims 1-6, wherein, in response to receiving a query request, reading the target data from the cache and/or database comprises:
And reading the target data from the cache in response to the target data not being read from the database.
10. The method of any of claims 1-6, wherein the cache comprises a plurality of sub-caches, and wherein storing the target data to the cache comprises:
and storing the target data into each sub-cache.
11. The method of any of claims 1-6, wherein, in response to receiving a query request, reading the target data from the cache and/or database comprises:
And in response to the target data corresponding to the query request being read from both the cache and the database, determining target data with the latest data state in the target data read from both the cache and the database as a query result based on the data state of the target data.
12. The method of any of claims 2-6, further comprising:
and transmitting the target data to the delay queue in response to the latest order state not being a pre-state of the target order state.
13. The method of claim 1, wherein, in response to receiving a query request, reading the target data from the cache and/or database comprises:
in response to the order data corresponding to the query request being read from both the cache and the database, order data having the latest order status among the order data read from both the cache and the database is determined as a query result based on the order status of the order data.
14. A data processing apparatus comprising:
a cache configured to store data;
a database configured to store data;
a message queue configured to store data;
a storage unit configured to store target data to the cache in response to receiving a write request of the target data;
A producer configured to store target data to a message queue in response to receiving a write request for the target data, wherein the target data is order data comprising a data state, the data state comprising an order state;
a consumer configured to read the target data from the message queue;
A writing unit configured to write the target data read from the message queue into a database, and
A query unit configured to read the target data from the cache and/or database in response to receiving a query request,
Wherein the writing unit is further configured to:
Acquiring the latest order state related to the target data in the cache and/or the database before the target data is written into the database,
And wherein writing the target data to a database is performed in response to the latest order state being a pre-state of a target order state of the target data.
15. The apparatus of claim 14, wherein the consumer comprises a target consumer, the message queue comprises a delay queue and a target queue with a corresponding target consumer, the producer is configured to store the target data to the target queue, the target consumer is configured to automatically read the target data from the target queue, the write unit is configured to write the target data read from the target queue to a database,
And wherein the message queue further comprises:
a first transmitting unit configured to transmit the target data to the delay queue in response to a failure of writing the target data into the database, and
And a second transmitting unit configured to retransmit the target data in the delay queue to the target queue in response to satisfaction of a first preset condition.
16. The apparatus of claim 15, wherein the message queue comprises a dead letter queue, data in the dead letter queue not being automatically consumed, and wherein the message queue further comprises:
And a third transmitting unit configured to transmit the target data to the dead letter queue in response to the cumulative number of times the target data is transmitted from the target queue to the delay queue reaching a threshold.
17. The apparatus of claim 16, wherein the consumer further comprises:
A dead letter consumer configured to, in response to a second preset condition being met, be triggered to read the target data from the dead letter queue,
Wherein the writing unit is further configured to write the target data read from the dead letter queue into a database.
18. An electronic device, comprising:
At least one processor, and
A memory communicatively coupled to the at least one processor, wherein
The memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-13.
19. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-13.
20. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the method of any of claims 1-13.
CN202111485148.5A 2021-12-07 2021-12-07 Data processing method, device, equipment and medium Active CN114138838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111485148.5A CN114138838B (en) 2021-12-07 2021-12-07 Data processing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111485148.5A CN114138838B (en) 2021-12-07 2021-12-07 Data processing method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN114138838A CN114138838A (en) 2022-03-04
CN114138838B true CN114138838B (en) 2025-02-07

Family

ID=80384717

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111485148.5A Active CN114138838B (en) 2021-12-07 2021-12-07 Data processing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN114138838B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115473937A (en) * 2022-09-14 2022-12-13 平安科技(深圳)有限公司 Message pushing method and device
CN115629878B (en) * 2022-10-20 2024-07-19 北京力控元通科技有限公司 Data processing method and system based on memory exchange
CN115577397B (en) * 2022-12-08 2023-03-10 无锡沐创集成电路设计有限公司 Data processing method, device, equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110633320A (en) * 2018-05-30 2019-12-31 北京京东尚科信息技术有限公司 Processing method, system, equipment and storage medium of distributed data service
CN110968439A (en) * 2019-11-28 2020-04-07 蜂助手股份有限公司 Intersystem message notification method, device, server, system and storage medium
CN112860750A (en) * 2021-03-11 2021-05-28 广州市网星信息技术有限公司 Data processing method and device, electronic equipment and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5930768A (en) * 1996-02-06 1999-07-27 Supersonic Boom, Inc. Method and system for remote user controlled manufacturing
CN105956166B (en) * 2016-05-19 2020-02-07 北京京东尚科信息技术有限公司 Database reading and writing method and device
CN106156278B (en) * 2016-06-24 2020-07-17 安徽三禾一信息科技有限公司 Database data reading and writing method and device
CN107944985A (en) * 2017-12-27 2018-04-20 掌合天下(北京)信息技术有限公司 Order retransmission method, device, server and readable storage medium storing program for executing
CN109669791A (en) * 2018-12-22 2019-04-23 网宿科技股份有限公司 Exchange method, server and computer readable storage medium
CN111652691A (en) * 2020-06-09 2020-09-11 北京字节跳动网络技术有限公司 Order information processing method and device and electronic equipment
CN112380227B (en) * 2020-11-12 2024-05-07 平安科技(深圳)有限公司 Data synchronization method, device, equipment and storage medium based on message queue
CN112559611A (en) * 2020-12-15 2021-03-26 中国人寿保险股份有限公司 Data processing method, device, equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110633320A (en) * 2018-05-30 2019-12-31 北京京东尚科信息技术有限公司 Processing method, system, equipment and storage medium of distributed data service
CN110968439A (en) * 2019-11-28 2020-04-07 蜂助手股份有限公司 Intersystem message notification method, device, server, system and storage medium
CN112860750A (en) * 2021-03-11 2021-05-28 广州市网星信息技术有限公司 Data processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN114138838A (en) 2022-03-04

Similar Documents

Publication Publication Date Title
CN114138838B (en) Data processing method, device, equipment and medium
US10884837B2 (en) Predicting, diagnosing, and recovering from application failures based on resource access patterns
WO2022063284A1 (en) Data synchronization method and apparatus, device, and computer-readable medium
CN110908838B (en) Data processing method and device, electronic equipment and storage medium
US20170083419A1 (en) Data management method, node, and system for database cluster
CN115934389A (en) System and method for error reporting and handling
CN110069217B (en) Data storage method and device
CN109034668B (en) ETL task scheduling method, ETL task scheduling device, computer equipment and storage medium
CN113641640B (en) Data processing method, device, equipment and medium for stream type computing system
CN111694684A (en) Abnormal construction method and device of storage equipment, electronic equipment and storage medium
CN113722389A (en) Data management method and device, electronic equipment and computer readable storage medium
CN111078418B (en) Operation synchronization method, device, electronic equipment and computer readable storage medium
CN110674167B (en) Database operation method and device, computer equipment and storage medium
CN114546705B (en) Operation response method, operation response device, electronic apparatus, and storage medium
US11126514B2 (en) Information processing apparatus, information processing system, and recording medium recording program
CN115510036A (en) Data migration method, device, equipment and storage medium
CN116680055A (en) Asynchronous task processing method and device, computer equipment and storage medium
US9535806B1 (en) User-defined storage system failure detection and failover management
CN112527540B (en) Method and device for realizing automatic degradation
CN115629918B (en) Data processing method, device, electronic equipment and storage medium
CN113836114B (en) Data migration method, system, equipment and storage medium
CN114879916B (en) Method and device for managing storage unit
CN117112311B (en) I/O driven data recovery method, system and device
US11836355B2 (en) Systems and methods for resetting a degraded storage resource
US11941432B2 (en) Processing system, processing method, higher-level system, lower-level system, higher-level program, and lower-level program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20220304

Assignee: Baisheng Consultation (Shanghai) Co.,Ltd.

Assignor: Shengdoushi (Shanghai) Technology Development Co.,Ltd.

Contract record no.: X2023310000138

Denomination of invention: Data processing methods, devices, equipment, and media

License type: Common License

Record date: 20230714

GR01 Patent grant