CN113553153A - Service data processing method and device and micro-service architecture system
Service data processing method and device and micro-service architecture system
- Publication number: CN113553153A (application CN202110823258.1A)
- Authority: CN (China)
- Prior art keywords: message, service, calculation result, shared memory, thread
- Prior art date: 2021-07-21
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F 9/4881 — Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues (under G06F 9/48: Program initiating; Program switching, e.g. by interrupt; G06F 9/4806: Task transfer initiation or dispatching; G06F 9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system)
- G06F 9/52 — Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F 9/544 — Buffers; Shared memory; Pipes (under G06F 9/54: Interprogram communication)
(All within G — Physics; G06 — Computing; Calculating or Counting; G06F — Electric digital data processing; G06F 9/00 — Arrangements for program control, e.g. control units; G06F 9/06 — using stored programs; G06F 9/46 — Multiprogramming arrangements.)
Abstract
The present specification relates to the technical field of micro-services, and in particular discloses a service data processing method, a service data processing apparatus and a micro-service architecture system. The micro-service architecture system comprises a main processing thread module, a shared memory, a pull thread module, a distributed publish-subscribe message system and a stream processing system. The main processing thread module receives a service message, parses it to obtain a service data stream, and sends the service data stream to a message receiving queue of the distributed publish-subscribe message system; it also obtains the corresponding calculation result from the shared memory and returns it to the caller. The stream processing system obtains the service data stream from the message receiving queue, performs logic calculation on it to obtain the calculation result, and writes the calculation result into a message return queue of the distributed publish-subscribe message system. The pull thread module obtains the calculation result from the message return queue and stores it in the shared memory. The scheme can improve system throughput and the utilization rate of server resources.
Description
Technical Field
The present disclosure relates to the field of micro service technologies, and in particular, to a method and an apparatus for processing service data, and a micro service architecture system.
Background
For a complex business application, in order to keep the system code maintainable, the business is often broken down into a plurality of micro-services, each responsible for one part of the processing. For example, a typical e-commerce business often has several such micro-services: one to manage inventory, one to manage payment, another to manage logistics, and so on. In practice, processing a business transaction is usually a matter of chaining these micro-services together and having them call one another.
Calls between these micro-services are conventionally realized as synchronous invocations. Synchronous invocation suffers from thread waiting: to support high throughput, the system needs a large number of threads, which increases scheduling overhead, degrades real-time performance and user experience, leaves online machines with very low utilization, and makes the system unstable in operation.
In view of the above problems, no effective solution has been proposed.
Disclosure of Invention
The embodiments of the present specification provide a service data processing method, a service data processing apparatus and a micro-service architecture system, so as to solve the problems in the prior art that micro-service invocation has poor real-time performance and server resource utilization is low.
An embodiment of the present specification provides a micro-service architecture system, including: a main processing thread module, a shared memory, a pull thread module, a distributed publish-subscribe message system and a stream processing system; the main processing thread module is used for receiving a service message sent by a caller, parsing the service message to obtain a service data stream, and sending the service data stream to a message receiving queue of the distributed publish-subscribe message system; the main processing thread module is further configured to obtain a calculation result corresponding to the service message from the shared memory, and return the calculation result to the caller; the stream processing system is used for obtaining the service data stream from the message receiving queue, performing logic calculation on the service data stream to obtain the calculation result corresponding to the service message, and writing the calculation result into a message return queue of the distributed publish-subscribe message system; and the pull thread module is used for obtaining the calculation result from the message return queue and storing the calculation result in the shared memory.
In one embodiment, the microservice architecture system further comprises a distributed storage system, the distributed storage system is used for storing a service data table, and the stream processing system is used for accessing the service data table in the distributed storage system to perform logic calculation on the service data stream.
In one embodiment, the main processing thread module is further configured to generate a thread lock corresponding to the service message and store the thread lock in the shared memory; the pull thread module is further configured to unlock the thread lock corresponding to the service message when the calculation result of the service message has been stored in the shared memory; and the main processing thread module is further configured to read the calculation result from the shared memory and return it to the caller when the thread lock is unlocked.
In one embodiment, the main processing thread module and/or the pull thread module are multi-concurrent.
An embodiment of the present specification provides a service data processing method based on a micro-service architecture system, where the micro-service architecture system includes a main processing thread module, a shared memory, a pull thread module, a distributed publish-subscribe message system, and a stream processing system, and the method includes: the main processing thread module receives a service message sent by a calling party, analyzes the service message to obtain a service data stream, and sends the service data stream to a message receiving queue of the distributed publish-subscribe message system; the stream processing system acquires the service data stream from the message receiving queue, performs logic calculation on the service data stream to obtain a calculation result, and writes the calculation result into a message return queue of the distributed publish-subscribe message system; the pull thread module obtains the calculation result from the message return queue and stores the calculation result into the shared memory; and the main processing thread module acquires the calculation result from the shared memory and returns the calculation result to the caller.
In one embodiment, after sending the service data stream to the message receiving queue of the distributed publish-subscribe message system, the method further includes: generating a thread lock corresponding to the service message, and storing the thread lock in the shared memory; correspondingly, reading the calculation result corresponding to the service message from the shared memory includes: determining whether the thread lock in the shared memory is in an unlocked state; and when it is determined that the thread lock in the shared memory is in the unlocked state, reading the calculation result corresponding to the service message from the shared memory, wherein the thread lock corresponding to the service message in the shared memory is unlocked once the calculation result has been stored in the shared memory.
An embodiment of the present specification provides a service data processing method based on a micro-service architecture system, where the micro-service architecture system includes a main processing thread module, a shared memory, a pull thread module, a distributed publish-subscribe message system and a stream processing system, and the method is applied to the main processing thread module and includes: receiving a service message sent by a caller; parsing the service message to obtain a service data stream, and sending the service data stream to a message receiving queue of the distributed publish-subscribe message system; and reading a calculation result corresponding to the service message from the shared memory and returning the calculation result to the caller; wherein the stream processing system obtains the service data stream from the message receiving queue, performs logic calculation on it, and writes the calculation result into a message return queue of the distributed publish-subscribe message system, and the pull thread module obtains the calculation result from the message return queue and stores it in the shared memory.
An embodiment of the present specification provides a service data processing apparatus based on a micro-service architecture system, where the micro-service architecture system includes a main processing thread module, a shared memory, a pull thread module, a distributed publish-subscribe message system and a stream processing system, and the apparatus is applied to the main processing thread module and includes: a receiving module, configured to receive a service message sent by a caller; a parsing module, configured to parse the service message to obtain a service data stream and send the service data stream to a message receiving queue of the distributed publish-subscribe message system; and a return module, configured to read a calculation result corresponding to the service message from the shared memory and return the calculation result to the caller; wherein the stream processing system obtains the service data stream from the message receiving queue, performs logic calculation on it, and writes the calculation result into a message return queue of the distributed publish-subscribe message system, and the pull thread module obtains the calculation result from the message return queue and stores it in the shared memory.
An embodiment of the present specification further provides a computer device, which includes a processor and a memory for storing processor-executable instructions, where the processor executes the instructions to implement the steps of the method for processing service data based on the micro service architecture system described in any of the above embodiments.
The present specification further provides a computer readable storage medium, on which computer instructions are stored, and when the instructions are executed, the steps of the method for processing business data based on a micro service architecture system described in any of the foregoing embodiments are implemented.
The embodiments of the present specification provide a micro-service architecture system that may include a main processing thread module, a shared memory, a pull thread module, a distributed publish-subscribe message system and a stream processing system. The main processing thread module may receive a service message sent by a caller, parse the service message to obtain a service data stream, and send the service data stream to a message receiving queue of the distributed publish-subscribe message system; it may further obtain the calculation result corresponding to the service message from the shared memory and return it to the caller. The stream processing system may obtain the service data stream from the message receiving queue, perform logic calculation on it to obtain the calculation result corresponding to the service message, and write the calculation result into a message return queue of the distributed publish-subscribe message system. The pull thread module may obtain the calculation result from the message return queue and store it in the shared memory. In this scheme, by adding the main processing thread module, the pull thread module and the shared memory, and combining them with the distributed publish-subscribe message system and the stream processing system, synchronous calls between micro-services can be converted into asynchronous messages, which reduces the scheduling overhead of the system, improves the efficiency and real-time performance of service processing, increases system throughput, and improves the utilization rate of server resources.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, are incorporated in and constitute a part of this specification, and are not intended to limit the specification. In the drawings:
FIG. 1 is a schematic structural diagram of a microservice architecture system according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a service data processing method based on a micro service architecture system in an embodiment of the present specification;
fig. 3 is a flowchart of a service data processing method based on a micro service architecture system in an embodiment of the present specification;
FIG. 4 is a flow chart illustrating a main processing thread in an embodiment of the present disclosure;
FIG. 5 is a schematic flow diagram of a stream processing system in an embodiment of the present description;
FIG. 6 is a flow diagram illustrating a pull thread module in an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of a service data processing apparatus based on a micro service architecture system in an embodiment of the present specification;
FIG. 8 shows a schematic diagram of a computer device in one embodiment of the present description.
Detailed Description
The principles and spirit of the present description will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely to enable those skilled in the art to better understand and to implement the present description, and are not intended to limit the scope of the present description in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present description may be embodied as a system, an apparatus, a method, or a computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
The embodiment of the specification provides a micro-service architecture system. The micro-service architecture system can comprise a main processing thread module, a shared memory, a pull thread module, a distributed publish-subscribe message system and a stream processing system. The main processing thread module can receive the service message sent by the caller, parse the service message to obtain a service data stream, and send the service data stream to a message receiving queue of the distributed publish-subscribe message system. The main processing thread module can also acquire the calculation result corresponding to the service message from the shared memory and return the calculation result to the caller. The stream processing system can acquire the service data stream from the message receiving queue, perform logic calculation on the service data stream to obtain the calculation result corresponding to the service message, and write the calculation result into a message return queue of the distributed publish-subscribe message system. The pull thread module may obtain the calculation result from the message return queue and store it in the shared memory.
The caller may be other microservices in the same business application or in different business applications. The main processing thread module can receive the service message sent by the calling party.
In one embodiment, the service message may include a service identifier. The main processing thread module may parse the received service message into a format such as JSON to obtain a service data stream, which likewise carries the service identifier. The main processing thread module then sends the service data stream to a message receiving queue of the distributed publish-subscribe message system.
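By way of a hedged illustration (not part of the original disclosure), the send path of the main processing thread module might look like the following minimal Java sketch, assuming Kafka as the distributed publish-subscribe message system and Jackson for JSON handling; the topic name "service-receive-queue" and the class and field names are illustrative assumptions.

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

// Minimal sketch of the main processing thread's send step.
public class MainThreadSender {
    private final ObjectMapper mapper = new ObjectMapper();
    private final KafkaProducer<String, String> producer;

    public MainThreadSender(Properties kafkaProps) {
        this.producer = new KafkaProducer<>(kafkaProps);
    }

    /** Normalize the service message into JSON and publish it to the message receiving
     *  queue, keyed by the service identifier (transaction primary key). */
    public void send(String transactionKey, Object serviceMessage) throws Exception {
        String json = mapper.writeValueAsString(serviceMessage);
        producer.send(new ProducerRecord<>("service-receive-queue", transactionKey, json));
    }
}
```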
The stream processing system may read the service data stream from the message receiving queue and perform logic calculation on it to obtain a calculation result. The stream processing system may then write the calculation result to the message return queue of the distributed publish-subscribe message system.
The pull thread module can obtain the calculation result from the message return queue in real time and store it in the shared memory. The calculation result may carry the service identifier. The main processing thread module may obtain the calculation result carrying the service identifier from the shared memory and return it to the corresponding caller.
In the micro-service architecture system in the embodiment, by adding the main processing thread module, the pull thread module and the shared memory and combining the distributed publish-subscribe message system and the stream processing system, synchronous calling between micro-services can be converted into asynchronous messages, so that the scheduling overhead of the system is reduced, the efficiency and the real-time performance of service processing are improved, and the system throughput and the utilization rate of server resources can be improved.
In some embodiments of the present description, the microservice architecture system may further include a distributed storage system, the distributed storage system may be configured to store business data tables, and the flow processing system may be configured to access the business data tables in the distributed storage system to perform logical computations on the business data flows.
In particular, the distributed storage system may be used to store business data tables, such as inventory tables and user account tables in an e-commerce scenario. When the stream processing system performs logic calculation on the service data stream, it can access the business data tables stored in the distributed storage system for real-time correlation calculation, which reduces extra service calls, replaces atomic services, improves calculation efficiency, and further helps ensure stable operation of the micro-service architecture system.
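For illustration only, a runtime lookup against such a business data table could be sketched as follows, assuming HBase as the distributed storage system; the table name "inventory", column family "cf" and qualifier "stock" are illustrative assumptions, not taken from the patent.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch of a real-time correlation lookup against a business data table.
public class BusinessTableLookup {
    public static long readStock(String itemId) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("inventory"))) {
            Result row = table.get(new Get(Bytes.toBytes(itemId)));
            byte[] value = row.getValue(Bytes.toBytes("cf"), Bytes.toBytes("stock"));
            return value == null ? 0L : Bytes.toLong(value);
        }
    }
}
```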
In some embodiments of the present specification, the main processing thread module may be further configured to generate a thread lock corresponding to the service message and store the thread lock in the shared memory; the pull thread module may be further configured to unlock the thread lock corresponding to the service message when the calculation result of the service message has been stored in the shared memory; and the main processing thread module may be further configured to read the calculation result from the shared memory and return it to the caller when the thread lock is unlocked.
Specifically, after parsing the received service message into a service data stream and sending it to the message receiving queue, the main processing thread module may generate a thread lock corresponding to the service message and store the thread lock in the shared memory. In order to distinguish the thread locks corresponding to different service messages, each thread lock may be stored in association with the service identifier in its service message. When the pull thread module stores the calculation result corresponding to a service message into the shared memory, it unlocks the thread lock corresponding to that service message. The main processing thread module can check in real time whether the thread lock corresponding to the service message is in the unlocked state, and when it is, read the calculation result corresponding to the service message from the shared memory and return it to the caller. In this way, the calculation result can be returned to the caller promptly after the service message has been processed, improving service processing efficiency.
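One possible (assumed) realization of this hand-off is sketched below: the shared memory holds two concurrent hash maps keyed by the transaction primary key, and a CountDownLatch stands in for the thread lock. The latch is an implementation assumption; the patent only requires that the main processing thread block until the pull thread signals that the calculation result has been stored.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch of the shared memory area: lockInfo + returnMessage, keyed by transaction key.
public class SharedMemory {
    private final Map<String, CountDownLatch> lockInfo = new ConcurrentHashMap<>();
    private final Map<String, String> returnMessage = new ConcurrentHashMap<>();

    /** Main processing thread: register a lock for this transaction before waiting. */
    public void registerLock(String txKey) {
        lockInfo.put(txKey, new CountDownLatch(1));
    }

    /** Main processing thread: block until the pull thread unlocks, then read the result. */
    public String awaitResult(String txKey, long timeoutMs) throws InterruptedException {
        CountDownLatch latch = lockInfo.get(txKey);
        if (latch != null && latch.await(timeoutMs, TimeUnit.MILLISECONDS)) {
            lockInfo.remove(txKey);
            return returnMessage.remove(txKey);   // calculation result keyed by transaction key
        }
        return null;                              // timed out; caller decides how to fail
    }

    /** Pull thread: store the calculation result, then release the corresponding lock. */
    public void storeResultAndUnlock(String txKey, String result) {
        returnMessage.put(txKey, result);
        CountDownLatch latch = lockInfo.get(txKey);
        if (latch != null) {
            latch.countDown();
        }
    }
}
```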
In some embodiments of the specification, the main processing thread module and/or the pull thread module may be deployed as multiple concurrent instances. Deploying the main processing thread and/or the pull thread in a multi-concurrent manner can further improve service processing efficiency and the utilization rate of system resources.
The above system is described below with reference to a specific embodiment, however, it should be noted that the specific embodiment is only for better describing the present specification and should not be construed as an undue limitation to the present specification.
Referring to fig. 1, a schematic structural diagram of a microservice architecture system in an embodiment of the present disclosure is shown. As shown in fig. 1, the micro-service architecture system provided by the present invention may include a main processing thread 1, a message receiving queue 2 of the distributed publish-subscribe message system, a stream processing system 3, a distributed storage system 4, a message returning queue 5 of the distributed publish-subscribe message system, a pull thread 6, and a shared memory 7.
The main processing thread 1 and the pull thread 6 may each be deployed as multiple concurrent instances. The distributed publish-subscribe message system may be a Kafka system. The stream processing system may be a Flink system. The distributed storage system may be HBase.
The main processing thread 1 is the entrance of the micro-service: each time the business scene is triggered, the caller invokes the micro-service and passes the related service message to the main processing thread. The service message contains a transaction primary key, which serves as the unique identifier of one transaction. After parsing the message into JSON format, the main processing thread 1 writes it into the message receiving queue 2, adds a thread lock, and waits. Once unlocked, it obtains the return message from the shared memory 7, keyed by the transaction primary key. The main processing thread 1 returns the return message to the caller, completing the micro-service call.
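Under the same assumptions as the earlier sketches (illustrative Kafka topic and the MainThreadSender and SharedMemory helpers), one micro-service call through the main processing thread could be glued together as follows; the five-second timeout is an illustrative choice. Note that this sketch registers the lock before publishing, a defensive ordering that avoids a race if the result returns before the lock is stored, whereas the embodiment above adds the lock after writing to the queue.

```java
// Illustrative glue for one micro-service call through the main processing thread.
public class MainProcessingThread {
    private final MainThreadSender sender;    // writes to the message receiving queue
    private final SharedMemory sharedMemory;  // lockInfo + returnMessage hash maps

    public MainProcessingThread(MainThreadSender sender, SharedMemory sharedMemory) {
        this.sender = sender;
        this.sharedMemory = sharedMemory;
    }

    /** Entry point of the micro-service: one call per triggered business scene. */
    public String handleCall(String transactionKey, Object serviceMessage) throws Exception {
        sharedMemory.registerLock(transactionKey);               // add the thread lock first (see note above)
        sender.send(transactionKey, serviceMessage);             // parse to JSON and enqueue
        return sharedMemory.awaitResult(transactionKey, 5_000);  // wait, then return the result to the caller
    }
}
```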
The message receiving queue 2 is used for transmitting the initial message.
The stream processing system 3 obtains the data stream from the message receiving queue 2 and computes it in real time, record by record, according to the business logic, using Flink operators. During the calculation it accesses the distributed storage system 4 and associates the required business data tables, and it writes the calculation result into the message return queue 5 in real time.
The distributed storage system 4 is used for storing business data tables, such as inventory tables and user account tables in e-commerce scenarios, so that the stream processing system 3 can perform correlation calculations at runtime and extra service calls are reduced.
The message return queue 5 is used for transmitting return messages.
The pull thread 6 obtains the return message from the message return queue 5 and stores it into the shared memory 7, keyed by the transaction primary key. It then uses the transaction primary key to release the corresponding thread lock in the shared memory 7.
The shared memory 7 is a communication area shared by all threads and contains two hash maps: one (lockInfo) stores the thread locks of the main processing threads, keyed by the transaction primary key; the other (returnMessage) stores the return messages, also keyed by the transaction primary key.
This embodiment provides a micro-service architecture system based on stream processing technology. A main processing thread, a pull thread and a shared memory module are added to the micro-service, and combined with stream processing technologies such as Kafka and Flink, synchronous calls are turned into asynchronous messages, improving system throughput and server resource utilization. Meanwhile, associating HBase tables in real time within the Flink engine replaces atomic service calls, reducing call volume and helping ensure stable operation of the micro-service architecture system.
The embodiment of the specification provides a service data processing method based on a micro-service architecture system, and the micro-service architecture system can comprise a main processing thread module, a shared memory, a pull thread module, a distributed publish-subscribe message system and a stream processing system. Fig. 2 shows a flowchart of a method for processing service data based on a microservice architecture system in an embodiment of the present specification. Although the present specification provides method operational steps or apparatus configurations as illustrated in the following examples or figures, more or fewer operational steps or modular units may be included in the methods or apparatus based on conventional or non-inventive efforts. In the case of steps or structures which do not logically have the necessary cause and effect relationship, the execution sequence of the steps or the module structure of the apparatus is not limited to the execution sequence or the module structure described in the embodiments and shown in the drawings. When the described method or module structure is applied in an actual device or end product, the method or module structure according to the embodiments or shown in the drawings can be executed sequentially or executed in parallel (for example, in a parallel processor or multi-thread processing environment, or even in a distributed processing environment).
Specifically, as shown in fig. 2, a method for processing service data based on a micro service architecture system provided in an embodiment of the present specification may include the following steps:
step S201, the main processing thread module receives a service packet sent by a caller, parses the service packet to obtain a service data stream, and sends the service data stream to a message receiving queue of the distributed publish-subscribe message system.
Step S202, the stream processing system obtains the service data stream from the message receiving queue, performs logic calculation on the service data stream to obtain a calculation result, and writes the calculation result into a message return queue of the distributed publish-subscribe message system.
In step S203, the pull thread module obtains the calculation result from the message return queue, and stores the calculation result in the shared memory.
Step S204, the main processing thread module obtains the calculation result from the shared memory, and returns the calculation result to the caller.
The embodiment of the present specification further provides a service data processing method based on a micro service architecture system, where the micro service architecture system may include a main processing thread module, a shared memory, a pull thread module, a distributed publish-subscribe message system, and a stream processing system, and the method is applied to the main processing thread module. Fig. 3 is a flowchart illustrating a method for processing service data based on a microservice architecture system in an embodiment of the present specification.
Specifically, as shown in fig. 3, a method for processing service data based on a micro service architecture system provided in an embodiment of the present specification may include the following steps:
step S301, receiving a service message sent by a calling party.
Step S302, analyzing the service message to obtain a service data stream, and sending the service data stream to a message receiving queue of the distributed publish-subscribe message system.
Step S303, reading a calculation result corresponding to the service message from the shared memory, and returning the calculation result to the caller.
The stream processing system obtains the service data stream from the message receiving queue, performs logic calculation on it, and writes the calculation result into a message return queue of the distributed publish-subscribe message system; the pull thread module obtains the calculation result from the message return queue and stores it in the shared memory.
In some embodiments of the present specification, after sending the service data stream to the message receiving queue of the distributed publish-subscribe message system, the method may further include: generating a thread lock corresponding to the service message, and storing the thread lock in the shared memory; correspondingly, reading the calculation result corresponding to the service message from the shared memory may include: determining whether the thread lock in the shared memory is in an unlocked state; and when it is determined that the thread lock in the shared memory is in the unlocked state, reading the calculation result corresponding to the service message from the shared memory, wherein the thread lock corresponding to the service message in the shared memory is unlocked once the calculation result has been stored in the shared memory.
The above method is described below with reference to a specific example, however, it should be noted that the specific example is only for better describing the present specification and should not be construed as an undue limitation on the present specification.
Referring to fig. 4, a flowchart of a main processing thread in an embodiment of the present specification is shown. As shown in FIG. 4, the flow of one micro-service call through the main processing thread 1 includes the following steps:
step S401: and triggering a service scene, and receiving a service message sent by a calling party.
Step S402: and analyzing the service message into a JSON format, and acquiring a transaction main key from the service message.
Step S403: and writing the analyzed initial message into a Kafka receiving queue.
Step S404: adding a thread lock, and locking the thread into the lockInfo of the shared memory, wherein the key is a transaction main key.
Step S405: and the thread is suspended and enters a waiting state.
Step S406: judging whether the thread lock is unlocked, and if so, executing the step S407; otherwise, step S405 is executed.
Step S407: and reading the return message from the shared memory return message, wherein the key is a main transaction key.
Step S408: and returning the return message to the calling party.
Referring to fig. 5, a flow chart of the stream processing system according to the embodiment of the present disclosure is shown. As shown in FIG. 5, the flow of calling the micro service through the stream processing system includes the following steps:
step S501: the data stream is obtained from the Kafka receive queue.
Step S502: logic calculation is performed using Flink operators; HBase is accessed, and the required business tables are associated for real-time calculation.
Step S503: the calculation result is written into the Kafka return queue.
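A minimal, hedged sketch of such a job is given below, assuming Flink with the legacy FlinkKafkaConsumer/FlinkKafkaProducer connectors; the topic names, the Kafka address and the computeWithHbaseLookup() placeholder are illustrative assumptions, and a real job would associate the HBase business tables inside a rich operator as described above.

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import java.util.Properties;

// Sketch of the stream-processing job: receive queue -> business logic -> return queue.
public class ServiceStreamJob {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "service-stream-job");

        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.addSource(new FlinkKafkaConsumer<>("service-receive-queue",   // step S501
                                               new SimpleStringSchema(), props))
           .map(ServiceStreamJob::computeWithHbaseLookup)                 // step S502
           .addSink(new FlinkKafkaProducer<>("service-return-queue",      // step S503
                                             new SimpleStringSchema(), props));

        env.execute("micro-service stream processing");
    }

    // Placeholder for the business-logic operator; a real job would open an HBase
    // connection in a rich function and join the required business tables here.
    private static String computeWithHbaseLookup(String json) {
        return json;  // identity stand-in for the actual calculation
    }
}
```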
Referring to FIG. 6, a flow diagram of a pull thread in an embodiment of the present disclosure is shown. As shown in FIG. 6, the flow of a microservice call through a pull thread includes the following steps:
step S601: and acquiring a return message from the Kafka return queue.
Step S602: and storing the return message into the shared memory return message, wherein the key is a main transaction key.
Step S603: finding the thread lock with the key as the transaction main key in the shared memory lockInfo, and releasing the thread lock.
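A corresponding hedged sketch of the pull thread is shown below, assuming a plain Kafka consumer on the return queue and the SharedMemory helper sketched earlier; the topic name is an illustrative assumption.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

// Sketch of the pull thread: poll the return queue and hand results to the shared memory.
public class PullThread implements Runnable {
    private final KafkaConsumer<String, String> consumer;
    private final SharedMemory sharedMemory;

    public PullThread(Properties kafkaProps, SharedMemory sharedMemory) {
        this.consumer = new KafkaConsumer<>(kafkaProps);
        this.sharedMemory = sharedMemory;
        this.consumer.subscribe(Collections.singletonList("service-return-queue"));
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100)); // step S601
            for (ConsumerRecord<String, String> record : records) {
                // steps S602 + S603: store the return message keyed by the transaction
                // primary key, then release the waiting main processing thread.
                sharedMemory.storeResultAndUnlock(record.key(), record.value());
            }
        }
    }
}
```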
In the above embodiment, a service processing method for a micro-service architecture system based on stream processing technology is provided: a main processing thread, a pull thread and a shared memory module are added to the micro-service, and combined with stream processing technologies such as Kafka and Flink, synchronous invocation is turned into asynchronous messages, thereby improving system throughput and server resource utilization. Meanwhile, associating HBase tables in real time within the Flink engine replaces atomic service calls, reducing call volume and helping ensure stable operation of the micro-service architecture system.
Based on the same inventive concept, an embodiment of the present specification further provides a service data processing apparatus based on a micro service architecture system, where the micro service architecture system includes a main processing thread module, a shared memory, a pull thread module, a distributed publish-subscribe message system, and a stream processing system, and the apparatus is applied to the main processing thread module, as described in the following embodiments. Because the principle of the micro-service architecture system-based service data processing device for solving the problems is similar to the micro-service architecture system-based service data processing method, the implementation of the micro-service architecture system-based service data processing device can refer to the implementation of the micro-service architecture system-based service data processing method, and repeated parts are not described again. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated. Fig. 7 is a block diagram of a service data processing apparatus based on a microservice architecture system according to an embodiment of the present specification, and as shown in fig. 7, the apparatus includes: the receiving module 701, the analyzing module 702, and the returning module 703 describe the following configuration.
The receiving module 701 is configured to receive a service packet sent by a calling party.
The parsing module 702 is configured to parse the service packet to obtain a service data stream, and send the service data stream to a message receiving queue of the distributed publish-subscribe message system.
The return module 703 is configured to read a calculation result corresponding to the service message from the shared memory, and return the calculation result to the caller;
wherein the stream processing system obtains the service data stream from the message receiving queue, performs logic calculation on the service data stream, and writes the calculation result into a message return queue of the distributed publish-subscribe message system, and the pull thread module obtains the calculation result from the message return queue and stores the calculation result in the shared memory.
In some embodiments of the present description, the parsing module may be further configured to: after the service data stream is sent to a message receiving queue of a distributed publish-subscribe message system, generating a thread lock corresponding to the service message, and storing the thread lock into the shared memory; correspondingly, the return module may be specifically configured to: determining whether a thread lock in the shared memory is in an unlocked state; and under the condition that the thread lock in the shared memory is determined to be in an unlocked state, reading a calculation result corresponding to the service message from the shared memory, wherein under the condition that the calculation result is stored in the shared memory, the thread lock corresponding to the service message in the shared memory is unlocked.
From the above description, it can be seen that the embodiments of the present specification achieve the following technical effects: by adding the main processing thread module, the pull thread module and the shared memory and combining the distributed publish-subscribe message system and the stream processing system, synchronous calling among micro services can be converted into asynchronous messages, the scheduling overhead of the system is reduced, the efficiency and the real-time performance of service processing are improved, the system throughput can be improved, and the utilization rate of server resources is improved.
The embodiment of the present specification further provides a computer device, which may specifically refer to a schematic structural diagram of a computer device based on the service data processing method of the micro service architecture system provided in the embodiment of the present specification, shown in fig. 8, where the computer device may specifically include an input device 81, a processor 82, and a memory 83. Wherein the memory 83 is configured to store processor-executable instructions. The processor 82, when executing the instructions, implements the steps of the method for processing service data based on the micro service architecture system according to any of the embodiments.
In this embodiment, the input device may be one of the main apparatuses for information exchange between a user and a computer system. The input device may include a keyboard, a mouse, a camera, a scanner, a light pen, a handwriting input board, a voice input device, etc.; the input device is used to input raw data and a program for processing the data into the computer. The input device can also acquire and receive data transmitted by other modules, units and devices. The processor may be implemented in any suitable way. For example, the processor may take the form of, for example, a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, an embedded microcontroller, and so forth. The memory may in particular be a memory device used in modern information technology for storing information. The memory may include multiple levels, and in a digital system, the memory may be any memory as long as it can store binary data; in an integrated circuit, a circuit without a physical form and with a storage function is also called a memory, such as a RAM, a FIFO and the like; in the system, the storage device in physical form is also called a memory, such as a memory bank, a TF card and the like.
In this embodiment, the functions and effects of the specific implementation of the computer device can be explained in comparison with other embodiments, and are not described herein again.
The present specification further provides a computer storage medium of a micro service architecture system based service data processing method, where the computer storage medium stores computer program instructions, and the computer program instructions, when executed, implement the steps of the micro service architecture system based service data processing method in any of the above embodiments.
In this embodiment, the storage medium includes, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Cache (Cache), a Hard Disk Drive (HDD), or a Memory Card (Memory Card). The memory may be used to store computer program instructions. The network communication unit may be an interface for performing network connection communication, which is set in accordance with a standard prescribed by a communication protocol.
In this embodiment, the functions and effects specifically realized by the program instructions stored in the computer storage medium can be explained by comparing with other embodiments, and are not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the embodiments of the present specification described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed over a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different from that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, embodiments of the present description are not limited to any specific combination of hardware and software.
It is to be understood that the above description is intended to be illustrative, and not restrictive. Many embodiments and many applications other than the examples provided will be apparent to those of skill in the art upon reading the above description. The scope of the description should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above description is only a preferred embodiment of the present disclosure, and is not intended to limit the present disclosure, and it will be apparent to those skilled in the art that various modifications and variations can be made in the embodiment of the present disclosure. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present specification shall be included in the protection scope of the present specification.
Claims (10)
1. A microservice architecture system, comprising: the system comprises a main processing thread module, a shared memory, a pulling thread module, a distributed publishing and subscribing message system and a stream processing system; wherein,
the main processing thread module is used for receiving a service message sent by a calling party, analyzing the service message to obtain a service data stream, and sending the service data stream to a message receiving queue of the distributed publish-subscribe message system; the main processing thread module is further configured to obtain a calculation result corresponding to the service message from the shared memory, and return the calculation result to the caller;
the stream processing system is used for acquiring the service data stream from the message receiving queue, performing logic calculation on the service data stream to obtain a calculation result corresponding to the service message, and writing the calculation result into a message return queue of the distributed publish-subscribe message system;
and the pull thread module is used for acquiring the calculation result from the message return queue and storing the calculation result into the shared memory.
2. The micro service architecture system of claim 1, further comprising a distributed storage system configured to store business data tables, wherein the stream processing system is configured to access the business data tables in the distributed storage system for performing logical computations on the business data streams.
3. The micro-service architecture system according to claim 1, wherein the main processing thread module is further configured to generate a thread lock corresponding to the service message, and store the thread lock in the shared memory;
the pull thread module is also used for unlocking a thread lock corresponding to the service message under the condition that the calculation result of the service message is stored in the shared memory;
the main processing thread module is further configured to read the calculation result from the shared memory and return the calculation result to the caller when the thread lock is unlocked.
4. The micro service architecture system of claim 1, wherein the main processing thread module and/or the pull thread module are multi-concurrent.
5. A business data processing method based on a micro-service architecture system is characterized in that the micro-service architecture system comprises a main processing thread module, a shared memory, a pull thread module, a distributed publishing and subscribing message system and a stream processing system, and the method comprises the following steps:
the main processing thread module receives a service message sent by a calling party, analyzes the service message to obtain a service data stream, and sends the service data stream to a message receiving queue of the distributed publish-subscribe message system;
the stream processing system acquires the service data stream from the message receiving queue, performs logic calculation on the service data stream to obtain a calculation result, and writes the calculation result into a message return queue of the distributed publish-subscribe message system;
the pull thread module obtains the calculation result from the message return queue and stores the calculation result into the shared memory;
and the main processing thread module acquires the calculation result from the shared memory and returns the calculation result to the caller.
6. A business data processing method based on a micro-service architecture system is characterized in that the micro-service architecture system comprises a main processing thread module, a shared memory, a pull thread module, a distributed publishing and subscribing message system and a stream processing system, and the method is applied to the main processing thread module and comprises the following steps:
receiving a service message sent by a calling party;
analyzing the service message to obtain a service data stream, and sending the service data stream to a message receiving queue of the distributed publish-subscribe message system;
reading a calculation result corresponding to the service message from the shared memory, and returning the calculation result to the calling party;
wherein the stream processing system obtains the service data stream from the message receiving queue, performs logic calculation on the service data stream, and writes the calculation result into a message return queue of the distributed publish-subscribe message system, and the pull thread module obtains the calculation result from the message return queue and stores the calculation result in the shared memory.
7. The method of claim 6, wherein after sending the traffic data stream to a message receive queue of a distributed publish-subscribe message system, further comprising:
generating a thread lock corresponding to the service message, and storing the thread lock into the shared memory;
correspondingly, reading the calculation result corresponding to the service message from the shared memory includes:
determining whether a thread lock in the shared memory is in an unlocked state;
and under the condition that the thread lock in the shared memory is determined to be in an unlocked state, reading a calculation result corresponding to the service message from the shared memory, wherein under the condition that the calculation result is stored in the shared memory, the thread lock corresponding to the service message in the shared memory is unlocked.
8. A business data processing device based on a micro-service architecture system is characterized in that the micro-service architecture system comprises a main processing thread module, a shared memory, a pull thread module, a distributed publishing and subscribing message system and a stream processing system, and the device is applied to the main processing thread module and comprises:
the receiving module is used for receiving the service message sent by the calling party;
the analysis module is used for analyzing the service message to obtain a service data stream and sending the service data stream to a message receiving queue of the distributed publish-subscribe message system;
a return module, configured to read a calculation result corresponding to the service message from the shared memory, and return the calculation result to the caller;
wherein the stream processing system obtains the service data stream from the message receiving queue, performs logic calculation on the service data stream, and writes the calculation result into a message return queue of the distributed publish-subscribe message system, and the pull thread module obtains the calculation result from the message return queue and stores the calculation result in the shared memory.
9. A computer device comprising a processor and a memory for storing processor-executable instructions that, when executed by the processor, implement the steps of the method of any one of claims 6 to 7.
10. A computer-readable storage medium having computer instructions stored thereon which, when executed, implement the steps of the method of any one of claims 6 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110823258.1A CN113553153A (en) | 2021-07-21 | 2021-07-21 | Service data processing method and device and micro-service architecture system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113553153A true CN113553153A (en) | 2021-10-26 |
Family
ID=78103672
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050021622A1 (en) * | 2002-11-26 | 2005-01-27 | William Cullen | Dynamic subscription and message routing on a topic between publishing nodes and subscribing nodes |
CN104092767A (en) * | 2014-07-21 | 2014-10-08 | 北京邮电大学 | A publish/subscribe system with added message queue model and its working method |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114579332A (en) * | 2022-03-04 | 2022-06-03 | 北京感易智能科技有限公司 | Text processing system, method and equipment capable of dynamically configuring operators |
CN114679503A (en) * | 2022-03-24 | 2022-06-28 | 长桥有限公司 | Market data processing method, system and equipment |
CN114741206A (en) * | 2022-06-09 | 2022-07-12 | 深圳华锐分布式技术股份有限公司 | Client data playback processing method, device, equipment and storage medium |
CN114741206B (en) * | 2022-06-09 | 2022-09-06 | 深圳华锐分布式技术股份有限公司 | Client data playback processing method, device, equipment and storage medium |
CN116627681A (en) * | 2023-07-25 | 2023-08-22 | 太平金融科技服务(上海)有限公司 | Service request processing method, device, computer equipment, medium and program product |
CN116627681B (en) * | 2023-07-25 | 2023-10-17 | 太平金融科技服务(上海)有限公司 | Service request processing method, device, computer equipment, medium and program product |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113553153A (en) | Service data processing method and device and micro-service architecture system | |
US8238350B2 (en) | Message batching with checkpoints systems and methods | |
US9736034B2 (en) | System and method for small batching processing of usage requests | |
CN108776934A (en) | Distributed data computational methods, device, computer equipment and readable storage medium storing program for executing | |
US20150081996A1 (en) | Pauseless garbage collector write barrier | |
US20120216215A1 (en) | Method and system for processing data for preventing deadlock | |
CN110727507B (en) | Message processing method and device, computer equipment and storage medium | |
CN113422842A (en) | Distributed power utilization information data acquisition system considering network load | |
US10778512B2 (en) | System and method for network provisioning | |
US10360057B1 (en) | Network-accessible volume creation and leasing | |
CN102801737A (en) | Asynchronous network communication method and device | |
US20030158883A1 (en) | Message processing | |
CN109600240A (en) | Group Communications method and device | |
CN112035255A (en) | Thread pool resource management task processing method, device, equipment and storage medium | |
Simoncelli et al. | Stream-monitoring with blockmon: convergence of network measurements and data analytics platforms | |
CN113127204A (en) | Method and server for processing concurrent services based on reactor network model | |
CN109327321B (en) | Network model service execution method and device, SDN controller and readable storage medium | |
CN111984505A (en) | Operation and maintenance data acquisition engine and acquisition method | |
CN107276912B (en) | Memory, message processing method and distributed storage system | |
CN113626221A (en) | Message enqueuing method and device | |
CN112019452B (en) | Method, system and related device for processing service requirement | |
Li et al. | Modeling message queueing services with reliability guarantee in cloud computing environment using colored petri nets | |
Oliveira et al. | IMCReo: interactive Markov chains for stochastic Reo | |
Praphamontripong et al. | An analytical approach to performance analysis of an asynchronous web server | |
US9069625B2 (en) | Method of parallel processing of ordered data streams |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |