Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, the use of the terms "first," "second," and the like to describe various elements is not intended to limit the positional relationship, timing relationship, or importance relationship of the elements, unless otherwise indicated, and such terms are merely used to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, they may also refer to different instances based on the description of the context.
The terminology used in the description of the various illustrated examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, there may be one or more of the elements. Furthermore, the term "and/or" as used in this disclosure encompasses any and all possible combinations of the listed items.
In the related art, data processing relies on keeping the database highly available, and data is written synchronously to the database throughout the data processing flow. The disadvantage of this approach is that when data traffic is heavy, frequent read and write operations degrade the performance of the database; when the database is unavailable, the entire data processing flow is blocked; and the maintenance cost of the database is increased.
To solve the above-mentioned problems, the present disclosure provides a data processing method that writes target data synchronously into a cache and introduces a message queue to decouple the database write, so that the target data is written into the database asynchronously. According to the method, the pressure of peak data traffic on the database can be reduced and data processing performance improved; at the same time, the dependence of the data processing system on each individual component is reduced, data processing can proceed normally even when any one of the cache, the message queue, and the database is unavailable, and the stability of the system is improved.
The data processing method of the present disclosure can be applied, for example but without limitation, to an order processing system. By synchronously writing the data of the entire order-state flow into a cache and asynchronously writing it into the database through a message queue, the pressure on the database can be relieved and the dependence of the order process on each component reduced, so that orders can still be placed successfully during peak order traffic even if any one of the cache, the message queue, and the database is unavailable, improving the stability of the order processing system.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 shows a flowchart of a data processing method according to an exemplary embodiment of the present disclosure. As shown in Fig. 1, the data processing method includes: step S101, in response to receiving a write request for target data, storing the target data in a cache and storing the target data in a message queue; step S102, reading the target data from the message queue; step S103, writing the target data read from the message queue into a database; and step S104, in response to receiving a query request, reading the target data from the cache and/or the database.
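The flow of steps S101 to S104 can be sketched with in-memory stand-ins for the cache, message queue, and database. This is a minimal illustration only; the names such as `handle_write_request` are assumptions for the sketch, not part of the disclosure.

```python
from collections import deque

# In-memory stand-ins for the three components (illustrative only).
cache = {}
message_queue = deque()
database = {}

def handle_write_request(key, value):
    """Step S101: store the target data in the cache and in the message queue."""
    cache[key] = value
    message_queue.append((key, value))

def drain_queue():
    """Steps S102-S103: read from the queue and write into the database."""
    while message_queue:
        key, value = message_queue.popleft()
        database[key] = value

def handle_query(key):
    """Step S104: read from the cache and/or the database."""
    return cache.get(key, database.get(key))

handle_write_request("order-1", {"state": "paid"})
drain_queue()  # in practice this runs asynchronously in a consumer process
```

In a real deployment the three components would be external services (for example Redis, RabbitMQ, and MySQL, as mentioned below), and `drain_queue` would run asynchronously in a separate consumer.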
In this way, the target data is written synchronously into the cache, and a message queue is introduced to decouple the database write so that the target data is written into the database asynchronously. The dependence of the data processing system on each individual component can thereby be reduced: under heavy data traffic, data processing can proceed normally as long as the data is successfully written into at least one of the cache, the message queue, and the database, reducing the pressure on the database and improving data processing performance.
In one example, reading the target data from the cache and/or the database may include comparing the target data in the cache with the target data in the database and taking the copy in the latest state as the query result. When the target data cannot be read from the database, the target data in the cache is taken as the query result. In this way, queries of the target data can be served while the pressure of heavy data traffic on the database is reduced and data processing performance is improved.
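This comparison can be sketched as follows, assuming each copy of the target data carries a monotonically increasing `version` field that stands in for "latest state" (an illustrative assumption; the disclosure does not prescribe a particular field):

```python
def query(key, cache, database):
    """Return the copy of the target data in the latest state; fall back
    to the cache when the database yields nothing (illustrative sketch)."""
    cached = cache.get(key)
    stored = database.get(key)
    if stored is None:  # database unavailable or entry missing
        return cached
    if cached is None:
        return stored
    # Both copies exist: take the one with the later state version.
    return cached if cached["version"] >= stored["version"] else stored
```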
The cache may be, for example, a Redis cluster distributed cache, but is not limited thereto; it may also be, for example, MongoDB, CouchDB, or another non-relational database. The message queue may be a RabbitMQ message queue, and the database may be a MySQL relational database, but they are not limited thereto: as long as the functions described above can be implemented, the specific implementation forms of the cache, the message queue, and the database are not limited.
According to some embodiments, the message queue includes a delay queue and a target queue with a corresponding target consumer; the target data is stored to the target queue, and the target consumer is configured to automatically read the target data from the target queue. In this case, storing the target data in the message queue may include: in response to a failure to write the target data read from the target queue into the database, sending the target data to the delay queue; and in response to a first preset condition being satisfied, resending the target data in the delay queue to the target queue.
In this way, when the target data read from the target queue fails to be written into the database, it is first written into the delay queue to await a retry of the database write. Loss of the target data can thus be avoided and data integrity protected, fully meeting the requirements of practical application scenarios.
In some embodiments, the target data in the delay queue may be resent to the target queue at preset intervals, where it waits to be automatically consumed by the target consumer so that the database write is retried.
Accordingly, according to some embodiments, storing the target data in the message queue includes recording a corresponding start time each time the target data is sent to the delay queue in response to a failure to write it into the database, wherein the first preset condition includes the time interval between the current time and that start time reaching a preset duration.
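The first preset condition can be sketched as a simple time check; the field name `start_time` and the five-second duration below are illustrative assumptions:

```python
import time

PRESET_DURATION = 5.0  # illustrative preset duration in seconds

def should_resend(entry, now=None):
    """First preset condition: the interval between the current time and
    the recorded start time has reached the preset duration."""
    now = time.monotonic() if now is None else now
    return now - entry["start_time"] >= PRESET_DURATION
```

In a real message broker such as RabbitMQ, this behavior would typically be realized with a per-message or per-queue TTL rather than an explicit check.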
The first preset condition is equivalent to the waiting time of the target data in the delay queue reaching a preset threshold, and it can be adjusted flexibly according to the requirements of practical application scenarios, improving data processing performance. For example, a threshold may be set on the number of times the data retries writing into the database, so that an infinite loop of retried writes, which would increase the pressure on the database, can be avoided.
Based on this, according to some embodiments, the message queue includes a dead letter queue, and data in the dead letter queue is not consumed automatically. Storing the target data in the message queue then includes sending the target data to the dead letter queue in response to the cumulative number of times the target data has been sent from the target queue to the delay queue reaching a threshold. In this way, the target data can be temporarily stored in the dead letter queue after it has repeatedly failed to be written into the database; loss of the target data is avoided while the pressure on the database during peak data traffic is relieved, improving data processing performance.
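The routing of a failed write can be sketched as follows; the `retries` counter and the threshold of three are illustrative assumptions:

```python
RETRY_THRESHOLD = 3  # illustrative threshold on cumulative delay-queue sends

def route_failed_write(message, delay_queue, dead_letter_queue):
    """After a failed database write, send the message to the delay queue,
    or to the dead letter queue once the cumulative count reaches the
    threshold; dead letter messages are not consumed automatically."""
    message["retries"] = message.get("retries", 0) + 1
    if message["retries"] >= RETRY_THRESHOLD:
        dead_letter_queue.append(message)
    else:
        delay_queue.append(message)
```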
Further, according to some embodiments, the reading the target data from the message queue in step S102 includes triggering a dead letter consumer to read the target data from the dead letter queue in response to a second preset condition being met, and the writing the target data read from the message queue into a database in step S103 includes writing the target data read from the dead letter queue into a database. Therefore, data loss can be effectively avoided, and data integrity is fully protected.
According to some embodiments, the second preset condition includes the database being in a writable state. The writable state may correspond to the database having recovered from an unavailable fault state in an extreme scenario, so that data loss can be effectively avoided and data integrity fully protected.
It can be appreciated that the second preset condition is not limited thereto; for example, it may include the target data traffic being below a certain threshold, so that data processing work can be balanced between peak and off-peak traffic periods, the read/write hardware performance of the database is fully utilized, and data processing performance is improved.
For example, in response to a second preset condition being met, a dead letter consumer may be manually triggered to read the target data from the dead letter queue.
According to some embodiments, the data processing method further comprises, in response to a failure of the target data to be stored in the message queue, directly writing the target data to the database. Therefore, when the message queue is unavailable, the target data can be synchronously written into the database, the dependence of the data processing system on the message queue is reduced, and the stability of the system is improved.
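This fallback can be sketched as follows; the two callables are illustrative stand-ins for a real queue client and database client:

```python
def store_target_data(message, queue_publish, database_write):
    """Prefer the message queue; if enqueueing fails (queue unavailable),
    write the target data into the database synchronously instead."""
    try:
        queue_publish(message)
    except Exception:
        database_write(message)
```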
According to some embodiments, reading the target data from the cache and/or database in response to receiving a query request includes reading the target data from the database in response to a failure of the target data to be stored in the cache. In this way, the query can fall through directly to the database when the cache is unavailable, reducing the dependence of the data processing system on the cache and improving the stability of the system.
According to some embodiments, the reading the target data from the cache and/or database in response to receiving a query request includes reading the target data from the cache in response to not reading the target data from the database. Therefore, the query can be realized by utilizing the cache when the database is unavailable, the dependence of the data processing system on the database is reduced, and the stability of the system is improved. Meanwhile, multiple attempts to query the database can be avoided, and the pressure on the database is reduced.
According to some embodiments, the cache includes a plurality of sub-caches, and storing the target data to the cache in step S101 includes storing the target data to each sub-cache. In this way, when any sub-cache is unavailable, data can be queried from the other sub-caches; in particular, when the database is also unavailable, an impact on data query performance can be avoided. The dependence on any single sub-cache is further reduced, and the performance and stability of the system are improved.
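Writing to and reading from the sub-caches can be sketched as follows; plain dictionaries stand in for real cache replicas (an assumption for illustration):

```python
def store_to_sub_caches(key, value, sub_caches):
    """Step S101 with a multi-replica cache: write to every sub-cache."""
    for sub_cache in sub_caches:
        sub_cache[key] = value

def read_from_sub_caches(key, sub_caches):
    """Query the first available sub-cache holding the key, so that a
    single unavailable replica does not block the query."""
    for sub_cache in sub_caches:
        if key in sub_cache:
            return sub_cache[key]
    return None
```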
According to some embodiments, the target data includes a data state, and reading the target data from the cache and/or database in step S104 includes: in response to the target data corresponding to the query request being read from both the cache and the database, determining, based on the data state of the target data, the copy with the latest data state as the query result. In this way, the target data with the latest data state serves as the query result, ensuring the accuracy of the query result.
According to some embodiments, when the target data is order data, the data processing system may be an order processing system, and the processing of the order data is implemented accordingly. The pressure on the order database during consumption peaks is relieved, so that the requirements of practical application scenarios can be fully met. For example, the data state of the order data may include an order state, such as a to-be-paid state, a paid state, a cancelled state, and so forth.
According to some embodiments, where the target data is order data, the target data includes a target order state, and the method further comprises, prior to writing the target data into the database, obtaining the latest order state associated with the target data in the cache and/or database, wherein writing the target data into the database is performed in response to the latest order state being a pre-state of the target order state. By checking the preset order-state transition conditions, the correctness of the target data written into the database can be ensured and data with an incorrect order state kept out of the database, guaranteeing the correctness and integrity of the data and fully meeting the requirements of practical application scenarios.
According to some embodiments, in the case where the target data is order data, the target data includes a target order state, and the message queue includes a delay queue, the data processing method further includes sending the target data to the delay queue in response to the latest order state not being a pre-state of the target order state. Data with an incorrect order state is thus temporarily stored in the delay queue to await a retry of the database write, which ensures the correctness and integrity of the data and fully meets the requirements of practical application scenarios.
In one example, the target order state may represent the payment state of the corresponding order, such as a to-be-paid state or a cancelled state. Writing the target data into the database in response to the latest order state being a pre-state of the target order state may be performed based on a preset order-state transition condition. For example, when the target order state is the paid state, the write of the target data can be performed only when the latest order state associated with the target data in the cache and/or the database is the to-be-paid state; when the latest order state is the cancelled state, the write cannot be performed. By checking the preset order-state transition conditions, the correctness of the target data written into the database can be ensured, fully meeting the requirements of practical application scenarios.
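The pre-state check can be sketched with a small transition table; the state names and table entries are illustrative assumptions:

```python
# Illustrative order-state transition table: each target state maps to
# its required pre-state; a cancelled order accepts no further writes.
PRE_STATE = {
    "paid": "to_be_paid",
    "cancelled": "to_be_paid",
}

def may_write(target_order_state, latest_order_state):
    """Allow the database write only when the latest stored order state
    is the pre-state of the incoming target order state."""
    return PRE_STATE.get(target_order_state) == latest_order_state
```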
According to some embodiments, in the case where the target data is order data, the data state includes an order state, and reading the target data from the cache and/or database in step S104 includes: in response to the order data corresponding to the query request being read from both the cache and the database, determining, based on the order state of the order data, the copy with the latest order state as the query result. In this way, the order data with the latest order state serves as the query result, ensuring the accuracy of the order query result.
Fig. 2 shows a flowchart of a data processing method according to an exemplary embodiment of the present disclosure.
As shown in fig. 2, the data processing method includes:
Step S201, in response to receiving a write request for target data, storing the target data in a cache and storing the target data in a target queue of a message queue, wherein the message queue further comprises a delay queue and a dead letter queue;
Step S202, the target consumer automatically reads the target data from the target queue;
Writing the target data read from the target queue into a database, and determining whether the target data is stored successfully in the database;
Ending, in response to the target data being stored successfully in the database;
In response to the target data not being stored successfully in the database, determining whether the number of times the target data has been sent from the target queue to the delay queue reaches a threshold;
In response to that number not reaching the threshold, executing step S203 to send the target data to the delay queue;
In response to the first preset condition being met (for example, the time elapsed since the target data was sent to the delay queue reaching a preset value), executing step S204 to resend the target data in the delay queue to the target queue, where it waits to be automatically read again by the target consumer;
In response to that number reaching the threshold, executing step S205 to send the target data to the dead letter queue;
In response to the second preset condition being met (for example, the database recovering to a writable state), executing step S206 to manually trigger the dead letter consumer to read the target data from the dead letter queue, and then executing step S207 to write the target data read from the dead letter queue into the database.
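A single pass of the target consumer in Fig. 2 (steps S202 through S205) can be sketched as follows, combining the write attempt with the delay and dead letter routing. This is an in-memory illustration; the `retries` counter and the threshold are assumptions:

```python
from collections import deque

RETRY_THRESHOLD = 3  # illustrative

def consume_target_queue(target_queue, delay_queue, dead_letter_queue, db_write):
    """Step S202: read each message and attempt the database write; on
    failure, route to the delay queue (step S203) or, once the threshold
    is reached, to the dead letter queue (step S205)."""
    while target_queue:
        message = target_queue.popleft()
        try:
            db_write(message)
        except Exception:
            message["retries"] = message.get("retries", 0) + 1
            if message["retries"] >= RETRY_THRESHOLD:
                dead_letter_queue.append(message)
            else:
                delay_queue.append(message)
```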
According to another aspect of the present disclosure, a data processing apparatus is also provided. Fig. 3 shows a block diagram of a data processing apparatus according to an exemplary embodiment of the present disclosure. As shown in Fig. 3, the data processing apparatus 300 includes: a cache 301 configured to store data; a database 302 configured to store data; a message queue 303 configured to store data; a storage unit 304 configured to store target data to the cache 301 in response to receiving a write request for the target data; a producer 305 configured to store the target data to the message queue 303 in response to receiving the write request; a consumer 306 configured to read the target data from the message queue 303; a writing unit 307 configured to write the target data read from the message queue 303 into the database 302; and a query unit 308 configured to read the target data from the cache 301 and/or the database 302 in response to receiving a query request. In this way, the target data is written synchronously into the cache, and the message queue is introduced to decouple the database write so that the target data is written into the database asynchronously; the dependence of the data processing system on each individual component can be reduced, and under heavy data traffic, data processing can proceed normally as long as the data is written successfully into at least one of the cache, the message queue, and the database, reducing the pressure on the database and improving data processing performance.
According to some embodiments, the consumers 306 include a target consumer, and the message queue 303 includes a delay queue and a target queue with the corresponding target consumer. The producer 305 is configured to store the target data to the target queue, the target consumer is configured to automatically read the target data from the target queue, and the writing unit is configured to write the target data read from the target queue into a database. The message queue 303 further includes a first sending unit configured to send the target data to the delay queue in response to a failure to write the target data into the database, and a second sending unit configured to resend the target data in the delay queue to the target queue in response to the first preset condition being met. The specific content of the first preset condition is as described above and is not repeated here. In this way, when the target data fails to be written into the database, the write can be retried; loss of the target data is avoided and data integrity is protected, fully meeting the requirements of practical application scenarios.
According to some embodiments, the message queue 303 includes a dead letter queue, and data in the dead letter queue is not consumed automatically. The message queue 303 further includes a third sending unit configured to send the target data to the dead letter queue in response to the cumulative number of times the target data has been sent from the target queue to the delay queue reaching a threshold. In this way, the target data can be temporarily stored after repeated failures to write it into the database; loss of the target data is avoided while the pressure on the database during peak data traffic is relieved, improving data processing performance.
According to some embodiments, the consumer 306 further comprises a dead letter consumer configured to be triggered to read the target data from the dead letter queue, wherein the writing unit 307 is further configured to write the target data read from the dead letter queue to a database. Therefore, data loss can be effectively avoided, and data integrity is fully protected.
Fig. 4 shows a schematic diagram of a data processing procedure according to an exemplary embodiment of the present disclosure, in which the arrows show the direction of signal flow. According to the operation requests and/or the flow of target data between the modules shown in Fig. 4, the data processing procedure may include the following steps: step S11, in response to receiving a write request for the target data, synchronously storing the target data in the cache 301 and, through the producer 305, storing the target data in the target queue of the message queue 303, the message queue 303 including a target queue, a delay queue, and a dead letter queue; step S12, automatically reading the target data from the target queue through the target consumer; step S13, writing the target data into the database 302; step S14, in response to the target data not being stored successfully in the database, sending the target data to the delay queue; step S15, in response to the first preset condition being met (for example, the time interval between the current time and the start time at which the target data was sent to the delay queue reaching a preset duration), resending the target data in the delay queue to the target queue, where it waits to be read again by the target consumer so that the write into the database 302 is retried; step S16, in response to the cumulative number of times the target data has been sent to the delay queue reaching a threshold after repeated write failures, sending the target data to the dead letter queue; step S17, in response to the second preset condition being met, manually triggering the dead letter consumer to read the target data from the dead letter queue and write it into the database 302; and step S18, in response to the target data not being stored successfully in the message queue, directly writing the target data into the database. In this way, as long as the data is written successfully into at least one of the cache, the message queue, and the database, normal execution of the data processing can be ensured, improving the performance and stability of the data processing system.
The data processing procedure further includes step S19: in response to receiving a query request, reading the target data from the cache 301 and the database 302, comparing the target data in the cache 301 with that in the database 302, and taking the copy in the latest data state as the query result.
According to another aspect of the present disclosure, there is also provided an electronic device comprising at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executable by the at least one processor to enable the at least one processor to perform the data processing method described above.
According to another aspect of the present disclosure, there is also provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the above-described data processing method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the above-mentioned data processing method.
Referring to Fig. 5, a block diagram of an electronic device 500 that can be used to implement embodiments of the present disclosure will now be described; it is an example of a hardware device that can be applied to aspects of the present disclosure. The electronic device may be any of various types of computer devices, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only and are not meant to limit implementations of the disclosure described and/or claimed herein.
Fig. 5 shows a block diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 5, the electronic device 500 may include at least one processor 501, a working memory 502, I/O devices 504, a display device 505, a storage 506, and a communication interface 507 capable of communicating with each other over a system bus 503.
The processor 501 may be a single processing unit or multiple processing units, all of which may include a single or multiple computing units or multiple cores. The processor 501 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. The processor 501 may be configured to obtain and execute computer readable instructions stored in the working memory 502, the storage 506, or other computer readable media, such as program code of the operating system 502a, program code of the application 502b, and the like.
Working memory 502 and storage 506 are examples of computer-readable storage media for storing instructions that are executed by processor 501 to implement the various functions previously described. Working memory 502 may include both volatile memory and nonvolatile memory (e.g., RAM, ROM, etc.). In addition, storage 506 may include hard drives, solid state drives, removable media, including external and removable drives, memory cards, flash memory, floppy disks, optical disks (e.g., CD, DVD), storage arrays, network attached storage, storage area networks, and the like. Working memory 502 and storage 506 may both be referred to herein collectively as memory or computer-readable storage medium, and may be non-transitory media capable of storing computer-readable, processor-executable program instructions as computer program code that may be executed by processor 501 as a particular machine configured to implement the operations and functions described in the examples herein.
The I/O devices 504 may include input devices and/or output devices. The input devices may be any type of device capable of inputting information to the electronic device 500, and may include, but are not limited to, a mouse, a keyboard, a touch screen, a trackpad, a trackball, a joystick, a microphone, and/or a remote control. The output devices may be any type of device capable of presenting information, and may include, but are not limited to, a video/audio output terminal, a vibrator, and/or a printer.
The communication interface 507 allows the electronic device 500 to exchange information/data with other devices over computer networks, such as the Internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers, and/or chipsets, such as Bluetooth™ devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
The application 502b in the working memory 502 may be loaded to perform the various methods and processes described above, such as steps S101 to S104 in Fig. 1. In some embodiments, some or all of the computer program may be loaded and/or installed onto the electronic device 500 via the storage 506 and/or the communication interface 507. When the computer program is loaded and executed by the processor 501, one or more steps of the data processing method described above may be performed.
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, operable to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), and the Internet.
The computing system may include clients and servers. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the foregoing methods, systems, and apparatus are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements of the embodiments or examples may be omitted or replaced with equivalents thereof. Furthermore, the steps may be performed in an order different from that described in the present disclosure, and various elements of the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced by equivalent elements that appear after the present disclosure.