
CN114461139B - Service processing method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN114461139B
Authority
CN
China
Prior art keywords
service
processing
sequence list
tasks
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111643867.5A
Other languages
Chinese (zh)
Other versions
CN114461139A (en)
Inventor
刘银齐
王云飞
吴瑞强
王慧
吴清波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Zhongke Shuguang Storage Technology Co ltd
Original Assignee
Tianjin Zhongke Shuguang Storage Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Zhongke Shuguang Storage Technology Co ltd filed Critical Tianjin Zhongke Shuguang Storage Technology Co ltd
Priority to CN202111643867.5A priority Critical patent/CN114461139B/en
Publication of CN114461139A publication Critical patent/CN114461139A/en
Application granted granted Critical
Publication of CN114461139B publication Critical patent/CN114461139B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 - Improving I/O performance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 - Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0659 - Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 - Interfaces specially adapted for storage systems
    • G06F3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 - In-line storage system
    • G06F3/0673 - Single storage device
    • G06F3/0674 - Disk device
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present application relates to a service processing method, apparatus, computer device, storage medium and computer program product. The method comprises the following steps: acquiring service operation tasks, each of which corresponds to a service operation type; performing aggregation and sorting processing on the service operation tasks according to the service address numbers they carry, to obtain a sequence list corresponding to each aggregation result; and processing the service operation tasks contained in each sequence list in sequence according to the processing tokens held by the mechanical disk delivery thread. The method improves service processing efficiency.

Description

Service processing method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of data storage technology, and in particular, to a service processing method, apparatus, computer device, storage medium, and computer program product.
Background
A conventional mechanical hard disk mainly comprises platters, magnetic heads, a spindle with its drive motor, a head controller, a data converter, an interface and a cache. The head controller positions the magnetic head at a specified location on the platter to perform data read and write operations.
Because a mechanical hard disk relies on head movement to read and write data, its random read/write performance at positions specified by front-end services is much poorer than its sequential performance. To improve the read/write performance of the mechanical hard disk, the Linux bcache block-device cache uses a high-performance solid state drive as a cache device for the mechanical hard disk: front-end service data is first written to the solid state drive and later flushed (written back) to the mechanical hard disk. This preserves high performance for upper-layer applications while exploiting the sequential-processing advantage of the back-end mechanical hard disk. Meanwhile, if the cache resources of the solid state drive are insufficient, a bypass mode is used to maintain front-end service performance: when the solid state drive cache is insufficient, front-end service operations bypass the solid state drive and write data directly to the mechanical hard disk.
However, the current block-device-cache mechanism for optimizing service data on the mechanical disk does not consider how the bypass and writeback processing modes affect each other's processing order. Bypass processing breaks the sequentiality of writeback processing, causing large head swings that consume service data processing time and reduce the processing efficiency of the mechanical hard disk.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a business processing method, apparatus, computer device, computer readable storage medium, and computer program product.
In a first aspect, the present application provides a service processing method, where the method includes:
acquiring service operation tasks;
performing aggregation and sorting processing on the service operation tasks according to the service address numbers carried by the service operation tasks, to obtain a sequence list corresponding to each aggregation and sorting result;
and processing the service operation tasks contained in the sequence list in sequence according to the processing tokens held by the mechanical disk delivery thread.
By adopting the method, the service processing order of the mechanical disk is re-planned by performing aggregation and sorting processing on the various service operation tasks, the idle time during which the head swings over long distances without doing useful work is reduced, and service processing efficiency is improved.
In one embodiment, each service operation task corresponds to a service operation type, where the service operation types include a write operation type, a read operation type and a flush-back operation type, and the service operation tasks correspondingly include write operation tasks, read operation tasks and flush-back operation tasks; after the service operation tasks are acquired, and before the aggregation and sorting processing is performed on them according to the service address numbers they carry to obtain a sequence list corresponding to each aggregation and sorting result, the method further comprises:
adding the write operation tasks and the read operation tasks to a write operation queue and a read operation queue respectively;
acquiring a current flush-back data amount threshold, and adding a number of flush-back operation tasks equal to the current flush-back data amount threshold to a flush-back operation queue, so that aggregation and sorting processing can be performed on the service operation tasks in the write operation queue, the read operation queue and the flush-back operation queue; the current flush-back data amount threshold is obtained according to the service processing speed of the mechanical disk.
In this embodiment, service operation tasks of different service operation types are added to different service operation queues, so that each service flow has its own operation queue even though operations on a queue must be performed under a lock, which reduces the lock granularity of the operation queues. At the same time, because the service operation tasks of different service operation types are added to different service operation queues, the processing flows of the different task types are independent of one another, which shortens the processing time of each queue and improves the overall processing performance of the mechanical disk.
In one embodiment, the performing, according to the service address number carried by the service operation task, aggregation and sorting processing on the service operation task to obtain a sequence list corresponding to each aggregation and sorting processing result includes:
According to the service address numbers carried by the service operation tasks, sequencing the service operation tasks, determining the processing sequence of all the service operation tasks, and adding the service operation tasks into a mechanical disc delivery thread queue according to the processing sequence;
And carrying out aggregation processing on each business operation task contained in the mechanical disc delivery thread queue to obtain a sequence list corresponding to each aggregation sequencing processing result.
In this embodiment, the service operation tasks of different service operation types are sorted and the sorted tasks are then aggregated, and a sequence list is determined for each aggregation result, so that the service operation tasks are ordered and split into lists that the mechanical disk delivery thread processes one by one, which improves the processing efficiency of the mechanical disk delivery thread.
In one embodiment, the performing of aggregation processing on each service operation task included in the mechanical disk delivery thread queue to obtain a sequence list corresponding to each aggregation result includes:
Based on the processing sequence of all the business operation tasks contained in the mechanical disc queue, carrying out aggregation processing on all the business operation tasks to obtain a plurality of aggregation processing results;
And determining the business operation task contained in each aggregation processing result as a sequence list to obtain a plurality of sequence lists.
In this embodiment, the aggregation results are obtained by performing aggregation and sorting processing on the operation tasks in the mechanical disk delivery thread queue, so that the service operation tasks included in each aggregation result are compact with respect to one another, which reduces the swing time of the magnetic head and improves the service processing performance of the mechanical disk.
In one embodiment, after performing the aggregation and sorting processing on the service operation tasks according to the service address numbers carried by the service operation tasks to obtain a sequence list corresponding to each aggregation result, and before sequentially processing the service operation tasks included in the sequence list according to the processing tokens held by the mechanical disk delivery thread, the method further includes:
judging whether an overtime sequence list exists among the sequence lists according to the operation time information corresponding to each sequence list;
if the overtime sequence list does not exist, determining the processing priority of each sequence list according to the corresponding continuity, service priority and operation time information of each sequence list;
And determining the target sequence list with the highest processing priority from the sequence lists, and delivering the target sequence list to the mechanical disc.
In this embodiment, the processing priority of the multiple sequential lists after the aggregation ordering processing is determined, so as to determine the processing sequence of the multiple sequential lists, and the computer device processes the sequential lists in turn according to the determined processing sequence, so that the service operation time of the service operation task included in each sequential processing list is tighter, and the service address distances are similar, thereby improving the service processing efficiency of the mechanical disk.
In one embodiment, the method further comprises:
and if an overtime sequence list exists, taking the overtime sequence list as the target sequence list and delivering it to the mechanical disk.
In one embodiment, each service operation task corresponds to a service operation type priority, and determining the processing priority of each sequential list according to the continuity, the service priority and the operation time information corresponding to each sequential list includes:
Counting the size of the storage space of the mechanical disc to be operated of all the business operation tasks in each sequence list, and taking the size of the storage space of the mechanical disc to be operated of the business operation tasks as the continuity of the sequence list;
Taking the priority corresponding to the service operation type of the first target service operation task in each sequence list as the service priority of the sequence list; the first target business operation task is a business operation task with the highest business operation type priority in the sequence list;
Taking the service operation time corresponding to the second target service operation task in each sequence list as operation time information of the sequence list; the second target business operation task is a business operation task with earliest operation time in the sequence list;
and determining the processing priority of each sequence list according to the continuity, the service priority, the operation time information and the preset priority algorithm corresponding to each sequence list.
In this embodiment, the processing priority of each sequential list is determined by determining the continuity, the service priority and the operation time information corresponding to each sequential list, so that the computer device may process each sequential list based on the processing priority, and timely feedback the processing information, thereby improving the front-end service processing performance and the mechanical disk processing efficiency.
In one embodiment, the processing the service operation tasks included in the sequence list sequentially according to the processing tokens held by the mechanical disc delivery thread includes:
judging whether a processing token exists in the current mechanical disc delivery thread or not;
If the processing token exists, the processing token is obtained, and the current business operation task to be processed in the sequence list is processed;
judging whether the service operation task to be processed exists in the sequence list, if so, executing the step of judging whether the processing token exists in the current mechanical disc delivery thread or not until all the service operation tasks to be processed in the sequence list are processed.
In this embodiment, for a plurality of sequential lists obtained after aggregation processing, processing is performed on service operation tasks included in each sequential list according to a processing token mechanism in a mechanical disc, so as to improve processing performance of front-end service operation tasks.
In a second aspect, the present application further provides a service processing apparatus, where the apparatus includes:
The acquisition module is used for acquiring service operation tasks; each service operation task corresponds to a service operation type;
the aggregation and sorting module is used for performing aggregation and sorting processing on the service operation tasks according to the service address numbers carried by the service operation tasks, to obtain a sequence list corresponding to each aggregation result;
and the processing module is used for processing the service operation tasks contained in the sequence list in sequence according to the processing tokens held by the mechanical disk delivery thread.
By adopting the method, the service processing order of the mechanical disk is re-planned by sorting and aggregating the various service operation tasks, the idle time caused by large head swings is reduced, and service processing efficiency is improved.
In a third aspect, the present application also provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring service operation tasks; each service operation task corresponds to a service operation type;
performing aggregation and sorting processing on the service operation tasks according to the service address numbers carried by the service operation tasks, to obtain a sequence list corresponding to each aggregation result;
and processing the service operation tasks contained in the sequence list in sequence according to the processing tokens held by the mechanical disk delivery thread.
By adopting the method, the service processing order of the mechanical disk is re-planned by sorting and aggregating the various service operation tasks, the idle time caused by large head swings is reduced, and service processing efficiency is improved.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring service operation tasks; each service operation task corresponds to a service operation type;
performing aggregation and sorting processing on the service operation tasks according to the service address numbers carried by the service operation tasks, to obtain a sequence list corresponding to each aggregation result;
and processing the service operation tasks contained in the sequence list in sequence according to the processing tokens held by the mechanical disk delivery thread.
By adopting the method, the service processing order of the mechanical disk is re-planned by sorting and aggregating the various service operation tasks, the idle time caused by large head swings is reduced, and service processing efficiency is improved.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which when executed by a processor performs the steps of:
acquiring service operation tasks; each service operation task corresponds to a service operation type;
performing aggregation and sorting processing on the service operation tasks according to the service address numbers carried by the service operation tasks, to obtain a sequence list corresponding to each aggregation result;
and processing the service operation tasks contained in the sequence list in sequence according to the processing tokens held by the mechanical disk delivery thread.
The above service processing method, apparatus, computer device, storage medium and computer program product acquire service operation tasks, each of which corresponds to a service operation type; perform aggregation and sorting processing on the service operation tasks according to the service address numbers they carry, to obtain a sequence list corresponding to each aggregation result; and process the service operation tasks contained in each sequence list in turn according to the processing tokens held by the mechanical disk delivery thread. In this way, the service processing order of the mechanical disk is re-planned by sorting and aggregating the various service operation tasks, the idle time caused by large head swings is reduced, and service processing efficiency is improved.
Drawings
FIG. 1 is a flow diagram of a business processing method in one embodiment;
FIG. 2 is a schematic diagram of a front end business operational task read/write to a designated location on a mechanical disk in one embodiment;
FIG. 3 is a flow chart of the processing steps of different business operations tasks in different business operations queues in one embodiment;
FIG. 4 is a flowchart illustrating an aggregate ordering process for business operations tasks in one embodiment;
FIG. 5 is a schematic diagram of the operation positions on a mechanical disk of conventional business operation tasks in one embodiment;
FIG. 6 is a flowchart showing the aggregate processing steps for all business operations tasks, in one embodiment;
FIG. 7 is a schematic diagram of a plurality of sequential lists aggregated in one embodiment;
FIG. 8 is a flow diagram of a process for determining a target order list in one embodiment;
FIG. 9 is a flow diagram of the process priority steps for determining each sequential list in one embodiment;
FIG. 10 is a flowchart of the steps for processing business operation tasks according to processing tokens in one embodiment;
FIG. 11 is a flowchart of an example of a method for processing services in one embodiment;
FIG. 12 is a block diagram of a business processing device in one embodiment;
Fig. 13 is an internal structural view of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in FIG. 1, a service processing method is provided. This embodiment is described as applied to a server for illustration; it is understood that the method may also be applied to a system including a terminal and a server and be implemented through interaction between the terminal and the server. In this embodiment, the method includes the following steps:
Step 102, obtaining a business operation task.
Wherein each business operation task corresponds to a business operation type.
In implementation, a user initiates a service operation task request through a front-end operation interface, and the computer device acquires the service operation task (IO) in response to the request. Service operation tasks may be of several types, for example the bypass operation type (bypass_IO) and the writeback (flush-back) operation type (writeback IO). The bypass operation type further includes the bypass read (bypass_read IO) and bypass write (bypass_write IO) types.
A bypass-type service operation task bypasses the cache device and reads from or writes to the mechanical disk directly (i.e. the mechanical hard disk, abbreviated as mechanical disk in the embodiments of the present application and not explained again below). As shown in FIG. 2, the bypass operation task requests sent by the front end are bypass_0, bypass_1, bypass_2, bypass_3 and bypass_4. For these 5 bypass operation tasks, the computer device determines the operation position of each task on the mechanical disk (hdd_position) according to the service address number carried by the task, i.e. the corresponding head position in FIG. 2, and the mechanical disk head then performs the corresponding service operation.
For flush-back operation tasks, the computer device first caches the tasks in the cache device and then, according to the service address number carried by each flush-back operation task, flushes the data to be written back from the cache device to the mechanical disk, thereby completing the processing of the service operation task data.
Step 104, performing aggregation and sorting processing on the service operation tasks according to the service address numbers carried by the service operation tasks, to obtain a sequence list corresponding to each aggregation result.
In implementation, each service operation task carries the address number of its service operation address (referred to as the service address number). According to these service address numbers, the computer device reorders all the service operation tasks contained in the different service processing queues to determine the processing order of all the service operation tasks, and then performs aggregation processing on the sorted service operation tasks to obtain the aggregation results. Finally, the computer device generates a corresponding sequence list for each aggregation result.
The reordered processing order of all the service operation tasks follows a pseudo-sequential order of the service address numbers, i.e. the service address numbers appearing in the processing order are not completely consecutive, but the reordered processing order does follow the service address numbers from small to large or from large to small, as sketched below.
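A minimal sketch of this reordering, assuming an in-memory task representation; the ServiceTask fields and the function name are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class ServiceTask:
    op_type: str         # "bypass_read", "bypass_write" or "writeback" (flush-back)
    address_number: int  # service address number: target position on the mechanical disk
    submit_time: float   # time at which the task was received
    size: int            # amount of disk space the task will operate on

def build_delivery_order(tasks):
    """Merge tasks from all operation queues and sort them by service address number,
    producing the pseudo-sequential processing order described above."""
    return sorted(tasks, key=lambda t: t.address_number)
```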
Step 106, processing the service operation tasks contained in the sequence list in turn according to the processing tokens held by the mechanical disk delivery thread.
In implementation, a plurality of processing tokens (token) exist in the mechanical disk, and based on a delivery mechanism of the processing tokens preset in the mechanical disk, the mechanical disk can control the number of storage objects (i.e. service operation tasks IO to be processed) in the mechanical disk storage system in unit time, that is, when the service operation tasks received in the mechanical disk correspond to the processing tokens, the IO requests of the service operation tasks can be processed. In general, the number of corresponding processing tokens in a mechanical disk storage system is typically equal to the number of IO requests that the storage system can process synchronously.
Aiming at the determined current to-be-processed sequence list, a plurality of service operation tasks are included in the sequence list, and service operation task IO requests which acquire the processing tokens are sequentially processed based on a processing token delivery mechanism of the mechanical disk.
In the above service processing method, the computer device acquires service operation tasks, each of which corresponds to a service operation type. The computer device then sorts the service operation tasks according to the service address numbers they carry, determines the processing order of all the service operation tasks, and adds the tasks to the mechanical disk delivery thread queue in that order. Next, the computer device performs aggregation processing on the service operation tasks contained in the mechanical disk delivery thread queue to obtain a sequence list corresponding to each aggregation result. Finally, the computer device processes the service operation tasks contained in each sequence list in turn according to the processing tokens held by the mechanical disk delivery thread. By adopting the method, the various service operation tasks are sorted by service address number, the processing order of the mechanical disk's service operation tasks is re-planned, the tasks are aggregated on the basis of this processing order to obtain a sequence list per aggregation result, and each sequence list is processed in turn, which reduces the idle time caused by large head swings and improves service processing efficiency.
In one embodiment, as shown in FIG. 3, since the service operation types corresponding to the service operation tasks include a write operation type, a read operation type and a flush-back operation type, the corresponding service operation tasks include write operation tasks, read operation tasks and flush-back operation tasks. Therefore, a plurality of queues of different types can be established in advance for the different types of service operation tasks, and the tasks belonging to different service operation flows are added to the different queues for asynchronous processing. Specifically, after step 102 and before step 104, the method further includes the following steps:
in step 302, the write operation task and the read operation task are added to the write operation queue and the read operation queue, respectively.
In implementation, the computer device adds a write operation task (write IO) and a read operation task (read IO) to a write operation queue (write_queue) and a read operation queue (read_queue), respectively. Specifically, since the front-end bypass service operation task needs to respond in time, if a corresponding front-end write operation task is received, the computer device unconditionally adds the write operation task to the write operation queue, and if a corresponding front-end read operation task is received, the computer device unconditionally adds the read operation task to the read operation queue.
Step 304, acquiring the current flush-back data amount threshold, and adding a number of flush-back operation tasks equal to the current flush-back data amount threshold to the flush-back operation queue, so that aggregation and sorting processing can be performed on the service operation tasks in the write operation queue, the read operation queue and the flush-back operation queue.
In implementation, the amount of flush-back work can be adjusted according to the service processing speed of the mechanical disk, because a flush-back operation task does not have to be submitted to the mechanical disk immediately and can wait in the cache device until it is flushed.
The flush-back data amount threshold (denoted wb_thre) is adjusted dynamically by a QoS (Quality of Service) mechanism, and the current flush-back data amount threshold is obtained according to the service processing speed of the mechanical disk. When the mechanical disk is processing services quickly, more flush-back operation tasks can be admitted and the flush-back data amount threshold is increased; when the mechanical disk is processing services slowly, fewer flush-back operation tasks are admitted and the flush-back data amount threshold is reduced, so as to relieve the processing pressure on the mechanical disk.
The specific adjustment process of the flush-back data amount threshold may be as follows. Based on the QoS adjustment mechanism, the processing speed of the mechanical disk is monitored in real time. When the mechanical disk becomes too slow, the number of processing tokens (token) in the mechanical disk is reduced; when the computer device observes that the token count has dropped, it reduces the flush-back data amount threshold (in the same proportion) until the threshold reaches its lower bound. Reducing the threshold reduces the amount of flush-back data and thus relieves the mechanical disk. Once the flush-back data amount threshold reaches the lower bound it is kept there even if the total token count continues to fall, so that flush-back operation tasks are still processed in a timely manner. Conversely, when the mechanical disk is processing quickly, the total number of processing tokens (token) in the mechanical disk is increased slowly; when the computer device observes that the token count has risen, it increases the flush-back data amount threshold (in the same proportion) until the threshold reaches its upper bound. Increasing the threshold increases the amount of flush-back data, making fuller use of the mechanical disk. Once the flush-back data amount threshold reaches the upper bound it is kept there even if the token count continues to rise, so that flush-back operation tasks do not become so numerous that the processing of front-end bypass tasks can no longer be guaranteed.
The computer device obtains the flush-back data amount threshold determined at the current mechanical disk processing speed and, using this current threshold as the limit on how many flush-back operation tasks may be enqueued, adds a number of flush-back operation tasks equal to the current flush-back data amount threshold to the flush-back operation queue, so that aggregation and sorting processing can be performed on the service operation tasks in the write operation queue, the read operation queue and the flush-back operation queue.
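The queue handling described above could look roughly as follows; this is an illustrative sketch that reuses the hypothetical ServiceTask from the earlier snippet, and the deque-based queues and function names are assumptions rather than the patent's implementation.

```python
from collections import deque

write_queue, read_queue, flush_queue = deque(), deque(), deque()

def enqueue_front_end(task):
    # Front-end bypass tasks need a timely response, so they are accepted unconditionally.
    if task.op_type == "bypass_write":
        write_queue.append(task)
    elif task.op_type == "bypass_read":
        read_queue.append(task)

def enqueue_flush_back(pending_flush_tasks, wb_thre):
    # Admit at most wb_thre flush-back tasks in this round; the remainder keep
    # waiting in the cache device until the next round.
    for _ in range(min(wb_thre, len(pending_flush_tasks))):
        flush_queue.append(pending_flush_tasks.pop(0))
```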
In summary, in the overall design of the embodiments of the present application, the total number of tokens is reduced relatively quickly and increased slowly, while the flush-back data amount threshold wb_thre changes in the same proportion within its threshold range. Unlike front-end bypass read/write operation tasks, which are added unconditionally to the corresponding read/write operation queues, flush-back operation tasks rely on wb_thre to determine how much data is delivered. wb_thre cannot be too large, or it would preempt the bandwidth of the front-end services; it also cannot be too small, or the cache device would fill up with cached data and the caching mechanism could no longer be provided. When the total token count increases, more IO service operation tasks are delivered to the back end, so the token count is increased slowly to avoid queuing of delivered IO. The token count is reduced when the mechanical disk is in a slow processing state, and it must be reduced quickly to lower the load on the mechanical disk.
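As a rough illustration of this proportional rule, the sketch below scales wb_thre with the token total and clamps it to a threshold range; the bounds and the exact scaling are assumptions, since the text only states that wb_thre changes in the same proportion as the token count within upper and lower limits.

```python
WB_THRE_MIN, WB_THRE_MAX = 8, 256   # assumed lower/upper bounds for wb_thre

def adjust_wb_thre(wb_thre, old_token_total, new_token_total):
    # Scale wb_thre in the same proportion as the change in the token total:
    # it shrinks quickly when tokens drop (disk slow) and grows slowly when they rise.
    scaled = wb_thre * new_token_total // max(old_token_total, 1)
    return max(WB_THRE_MIN, min(WB_THRE_MAX, scaled))
```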
In one embodiment, as shown in FIG. 4, the specific process of step 104 includes:
Step 402, according to the service address number carried by the service operation task, sorting the service operation task, determining the processing sequence of all the service operation tasks, and adding the service operation task into the mechanical disc delivery thread queue according to the processing sequence.
In implementation, in a conventional service operation task processing flow, the computer device reads, writes or flushes service operation tasks to the mechanical disk according to the service operation time of each task and the priority of its service operation type. In this manner, however, because the priorities of the front-end read and write operation types are higher than the priority of the flush-back (writeback) operation type, the processing order of the flush-back operation tasks is interrupted by front-end read and write operation tasks. As shown in FIG. 5, the flush-back task list (wb_list) contains the flush-back tasks wb_0, wb_1 and wb_2, and the front-end bypass tasks include the operation task bypass_0 (a bypass task is a read or write operation task). The service operation time of bypass_0 falls before that of wb_2, and because the priority of the bypass operation type is higher than that of the wb operation type, bypass_0 disturbs the processing order of the flush-back operation tasks. That is, as shown in FIG. 5, the computer device flushes wb_0 to position t0_s0 on the mechanical disk (head position 1) and wb_1 to position t0_s1 (head position 2), then bypass_0 reads or writes position t0_s100 (head position 3), and only afterwards is the flush-back task wb_2 processed and flushed to mechanical disk position t0_s2. Because the bypass service operation task is inserted, the head of the mechanical disk spends a long time swinging without doing useful work (i.e. the time to travel from t0_s1 to t0_s100 and back to t0_s2), and the processing efficiency of the service operation tasks is low. Therefore, after acquiring all the service operation tasks, the computer device reorders them based on their service address numbers, determines the processing order of all the service operation tasks, and adds them to the mechanical disk delivery thread queue (hdd_queue) in that order.
Specifically, each business operation task needs to operate at a designated location on the mechanical disc, and thus, each business operation task carries a corresponding business address number (e.g., the location "t0_s1"). And then, the computer equipment performs sequencing processing on the plurality of business operation tasks according to the business address numbers carried by the business operation tasks, and determines the processing sequence of all the re-planned business operation tasks.
And step 404, performing aggregation processing on each business operation task contained in the mechanical disc delivery thread queue to obtain a sequence list corresponding to each aggregation processing result.
In implementation, for the determined processing sequence of all service operation tasks, the computer device performs aggregation processing on each service operation task included in the mechanical disc delivery thread queue according to a preset aggregation algorithm, so as to obtain a sequence list (seq_list) corresponding to each aggregation processing result.
Optionally, the preset aggregation algorithm may be an aggregation algorithm that groups service operation tasks whose service addresses (characterized by their service address numbers) lie within a preset distance threshold of one another, or a cluster analysis algorithm centered on preset service addresses (characterized by service address numbers); the embodiments of the present application do not limit this. A minimal sketch of the distance-threshold variant is given below.
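A minimal sketch of the distance-threshold variant, assuming the address-sorted order produced earlier; MAX_GAP is an invented tuning parameter, not a value from the patent.

```python
MAX_GAP = 16  # assumed maximum address-number gap between tasks in one sequence list

def aggregate_into_seq_lists(ordered_tasks, max_gap=MAX_GAP):
    # Walk the address-sorted tasks and start a new sequence list whenever the
    # gap to the previous task's address exceeds the threshold.
    seq_lists, current = [], []
    for task in ordered_tasks:
        if current and task.address_number - current[-1].address_number > max_gap:
            seq_lists.append(current)
            current = []
        current.append(task)
    if current:
        seq_lists.append(current)
    return seq_lists
```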
In this embodiment, service operation tasks of different service operation types are added to different service operation queues, so that each service flow has its own operation queue even though operations on a queue must be performed under a lock, which reduces the lock granularity of the operation queues. At the same time, because the service operation tasks of different service operation types are added to different service operation queues, the processing flows of the different task types are independent of one another, which shortens the processing time of each queue and improves the overall processing performance of the mechanical disk.
In one embodiment, as shown in FIG. 6, the specific process of step 404 includes:
Step 602, performing aggregation processing on all the service operation tasks based on the processing sequence of all the service operation tasks contained in the mechanical disc delivery thread queue, so as to obtain a plurality of aggregation processing results.
In implementation, the computer device performs aggregation processing on all the service operation tasks contained in the mechanical disk delivery thread queue (hdd_queue) according to the processing order, so as to obtain a plurality of aggregation results. Specifically, because the mechanical disk delivery thread queue contains a large number of service operation tasks, the service operation addresses of those tasks may still be unevenly distributed.
For example, let r denote a bypass read (front-end read operation task), w a bypass write (front-end write operation task), and wb a writeback (flush-back operation task). After sorting and labelling the service operation tasks according to the processing order, the address numbers of their service operation addresses on the mechanical disk are, respectively: t1_s1, t2_s2, t3_s8, t4_s20, t5_s22. The "tn" in a service address number indicates the processing order of the task, i.e. the 1st service operation task corresponds to "t1", the 2nd to "t2", ..., and the n-th to "tn". The "sn" in a service address number is a position number; for example, the "s1" in the service address number of the 1st read operation task indicates that its service operation address is the first position on the mechanical disk. For these 5 service operation tasks in the mechanical disk delivery thread queue, the service address numbers show that the 1st and 2nd read operation tasks are close to each other, the 4th write operation task and the 5th flush-back operation task are close to each other, and the write address of the 3rd write operation task on the mechanical disk is relatively far from the other tasks. Therefore, the service operation tasks in the same mechanical disk delivery thread queue need a further aggregation step that divides them into a plurality of aggregation results.
In step 604, the service operation task included in each aggregate processing result is determined as a sequential list, so as to obtain a plurality of sequential lists.
In implementation, the computer device determines the service operation tasks included in each aggregation processing result as a sequential list, where the service operation tasks included in each sequential list have a shorter service operation time interval and a closer address distance of the service operation in the mechanical disk relative to other service operation tasks outside the sequential list.
As shown in FIG. 7, all the service operation tasks contained in the mechanical disk delivery thread queue are aggregated into 4 corresponding sequence lists: seq_list0, seq_list1, seq_list2 and seq_list3, where seq_list0 contains io1_r, io3_w, io7_w and io11_w; seq_list1 contains io2_r, io4_wb and io5_w; seq_list2 contains io6_w, io8_w and io12_w; and seq_list3 contains io9_wb and io10_wb ("IO" and "io" denote the same thing in the embodiments of the present application; the difference in writing is not significant).
In this embodiment, all the service operation tasks that have been sorted in the mechanical disk delivery thread queue are aggregated to obtain a plurality of aggregation results, so that the service operation tasks contained in each aggregation result are compact with respect to one another, which reduces the swing time of the magnetic head and improves the service processing performance of the mechanical disk.
In one embodiment, as shown in FIG. 8, each aggregation result obtained in the mechanical disk delivery thread queue is taken as a sequence list, giving a plurality of corresponding sequence lists; the computer device then determines the processing priority of each sequence list and processes the sequence lists in descending order of processing priority. After step 104 and before step 106, the method further comprises:
Step 802, judging whether a time-out sequence list exists in each sequence list according to the operation time information corresponding to the sequence list.
In practice, the computer device first uses the operation time information of the sequential list as a criterion to determine whether there is a time-out sequential list in each sequential list.
In an alternative embodiment, the operation time information of each sequence list is the earliest service operation time among the service operation times of all the service operation tasks in the list, so as to ensure that the task in each sequence list that needs processing earliest is handled in time.
Step 804, if there is no overtime sequence list, determining the processing priority of each sequence list according to the continuity, service priority and operation time information corresponding to each sequence list.
In implementation, if the current time does not exceed the operation time information of any sequence list, or the amount by which the current time exceeds the operation time information of each sequence list is still within a preset timeout range, then no overtime sequence list exists. In that case, the sequence lists can be screened on multiple dimensions, i.e. the processing priority of each sequence list is determined from the three dimensions corresponding to that list: continuity, service priority and operation time information. As shown in FIG. 7, the service priority (prio) of sequence list seq_list0 is r, its continuity (seq) is 4 and its operation time (time) is 2; the service priority of seq_list1 is r, its continuity 3 and its operation time 3; the service priority of seq_list2 is w, its continuity 3 and its operation time 4; and the service priority of seq_list3 is wb, its continuity 2 and its operation time 1.
The method for determining the features of the three dimensions, namely the continuity, the service priority and the operation time information, corresponding to each sequence list will be described in detail later in the embodiment of the present application, and will not be described in detail here.
Step 806, determining a target sequence list with highest processing priority from the sequence lists, and delivering the target sequence list to the mechanical disk.
In implementation, based on the processing priority determined for each sequence list, the computer device selects, from the sequence lists of the current round (i.e. all sequence lists currently present), the sequence list with the highest processing priority as the target sequence list (subset_list) and processes it first, i.e. the computer device delivers the service operation tasks (IOs) contained in the target sequence list to the mechanical disk.
In this embodiment, the processing priority of the multiple sequential lists after aggregation processing is determined, so as to determine the processing sequence of the multiple sequential lists, and the computer device processes the sequential lists in turn according to the determined processing sequence, so that service operation time of service operation tasks included in each sequential processing list is tighter, service address distances are similar, and further, service processing efficiency of the mechanical disk is improved.
In one embodiment, for another case in the above embodiment, the method further includes:
if the overtime sequence list exists, the overtime sequence list is used as a target sequence list, and the delivery is carried out to the mechanical disk.
In practice, if there is a sequence list that has timed out for the current time, the timed out sequence list needs to be preferentially processed, that is, the sequence list is taken as a target sequence list and delivered to the mechanical disk, so as to process the business operation task in the target sequence list.
In this embodiment, the operation time information of a sequence list is used as the judging condition, i.e. the earliest service operation time among the service operation tasks in the list is checked; if that time has passed its timeout, the list contains an overtime service operation task, and the overtime sequence list is taken as the target sequence list and processed first, so as to reduce the processing delay of the service operation tasks in the list and improve the timeliness of front-end service processing. A sketch of this target list selection is given below.
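The selection rule of steps 802 to 806 and the timeout branch above could be sketched as follows; TIMEOUT_SECONDS is an assumed budget, and compute_priority is the hypothetical priority function sketched later in this description.

```python
import time

TIMEOUT_SECONDS = 1.0  # assumed per-list timeout budget

def select_target_list(seq_lists, now=None):
    now = time.monotonic() if now is None else now
    # A list whose earliest task has waited past the timeout is delivered first.
    timed_out = [sl for sl in seq_lists
                 if now - min(t.submit_time for t in sl) > TIMEOUT_SECONDS]
    if timed_out:
        return timed_out[0]
    # Otherwise the list with the highest processing priority becomes the target.
    return max(seq_lists, key=lambda sl: compute_priority(sl, now))
```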
In one embodiment, as shown in FIG. 9, the present application provides a manner of determining the three feature dimensions corresponding to a sequence list, namely continuity, service priority and operation time information. Based on the determined values of the three feature dimensions of each sequence list, the computer device can determine the processing priority of each sequence list and thereby obtain the processing order of the sequence lists. That is, the specific processing procedure of step 804 includes:
and step 902, counting the size of the storage space of the mechanical disk to be operated of all the business operation tasks in each sequence list, and taking the size of the storage space of the mechanical disk to be operated of the business operation tasks as the continuity of the sequence list.
In implementation, the computer device counts, for each sequence list, the size of the mechanical disk storage space to be operated on by all the service operation tasks in the list, and takes that size as the continuity of the sequence list. Thus, for the 4 sequence lists determined in FIG. 7, the computer device obtains the continuity of each by statistical calculation: the continuity of seq_list0 is 4, the continuity of seq_list1 is 3, the continuity of seq_list2 is 3 and the continuity of seq_list3 is 2.
Step 904, taking the priority corresponding to the service operation type of the first target service operation task in each sequence list as the service priority of the sequence list.
The first target business operation task is a business operation task with the highest business operation type priority in the sequence list.
In implementation, the computer device uses the priority corresponding to the service operation type of the first target service operation task in each sequence list as the service priority of the sequence list.
Specifically, the service operation tasks correspond to different service operation types, such as a read operation type, a write operation type and a flush-back operation type. An operation type priority is preset for each service operation type; for example: bypass r_IO (a read that bypasses the cache device) has the highest priority, bypass w_IO (a write that bypasses the cache device) the next highest, and writeback_IO the lowest. Further, for each sequence list, the service operation types of all the service operation tasks contained in the list are identified, the highest service operation type priority among them is determined, and that priority is taken as the service priority of the sequence list.
For example, if the sequence list only includes two service operation tasks of bypass w_io and writeback_io, the service priority of the sequence list is the highest service operation type priority thereof: bypass w_IO. If the sequence list contains three service operation tasks of bypass r_IO, bypass w_IO and writeback_IO, the service priority of the sequence list is the highest service operation type priority: bypass r_IO.
Step 906, taking the service operation time corresponding to the second target service operation task in each sequence list as the operation time information of the sequence list.
The second target business operation task is the business operation task with the earliest operation time in the sequence list.
In implementation, the computer device uses the service operation time corresponding to the second target service operation task in each sequence list as operation time information of the sequence list.
Specifically, the computer device uses the earliest service operation time corresponding to each service operation task included in the sequence list as the operation time basis of the sequence list, identifies the service operation task with the earliest operation time included in each sequence list, uses the service operation task as a second target service operation task, and uses the service operation time of the second target service operation task as the operation time information of the sequence list.
Step 908, determining the processing priority of each sequential list according to the continuity, service priority, operation time information and preset priority algorithm corresponding to each sequential list.
In implementation, the computer device determines the processing priority of each sequential list according to the features of the three dimensions determined for each sequential list, namely the continuity, the service priority and the operation time information, in combination with a preset priority algorithm. The preset priority algorithm is: processing priority = continuity × service priority ÷ queuing time, where the queuing time is the current time minus the operation time information of the sequence list.
Further, the computer device may determine the processing order of the respective sequence lists based on the determined magnitude of the processing priority of the sequence list, such that each sequence list is sequentially executed according to the determined processing order.
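The following Python sketch illustrates steps 906 and 908 together, under the reading that the preset algorithm multiplies continuity by service priority and divides by queuing time; the field names, the wall-clock source, and the sort order are assumptions of this sketch, not details fixed by the embodiment.

```python
import time

def processing_priority(seq_list, now=None):
    """Processing priority = continuity * service priority / queuing time."""
    now = time.time() if now is None else now
    continuity = sum(task["size"] for task in seq_list)             # step 902
    service_priority = max(task["priority"] for task in seq_list)   # step 904
    earliest_op_time = min(task["op_time"] for task in seq_list)    # step 906
    queuing_time = max(now - earliest_op_time, 1e-6)  # guard against divide-by-zero
    return continuity * service_priority / queuing_time             # step 908

def order_sequence_lists(seq_lists, now=None):
    """Process the sequence list with the highest computed priority first."""
    return sorted(seq_lists, key=lambda s: processing_priority(s, now), reverse=True)
```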
In this embodiment, the processing priority of each sequential list is determined by determining the continuity, the service priority and the operation time information corresponding to each sequential list, so that the computer device may process each sequential list based on the processing priority, and timely feedback the processing information, thereby improving the front-end service processing performance and the mechanical disk processing efficiency.
In one embodiment, as shown in FIG. 10, the specific process of step 108 includes the steps of:
step 1002, it is determined whether a processing token exists for the current mechanical disk delivery thread.
In implementations, a computer device determines whether a processing token exists for the current mechanical disk delivery thread. Specifically, the mechanical disc delivery thread holds a dynamic number of processing tokens (tokens). According to the processing token delivery mechanism, the mechanical disc delivery thread can control the number of storage objects (i.e. pending service operation task IO requests) delivered to the mechanical disc for processing per unit time: an IO request of a service operation task can be processed only when a processing token is available in the mechanical disc delivery thread. Therefore, the computer device first judges whether a processing token exists in the current mechanical disc delivery thread.
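A minimal sketch of such a processing-token pool is given below; the class name, the method names and the use of a condition variable are illustrative choices of this sketch, not details taken from the embodiment.

```python
import threading

class TokenPool:
    """Bounded pool of processing tokens held by the delivery thread."""

    def __init__(self, capacity: int):
        self._capacity = capacity
        self._available = capacity
        self._cond = threading.Condition()

    def try_acquire(self) -> bool:
        """Take a token if one is free right now; otherwise return False."""
        with self._cond:
            if self._available > 0:
                self._available -= 1
                return True
            return False

    def wait_and_acquire(self) -> None:
        """Block until the disk releases a token, then take it."""
        with self._cond:
            while self._available <= 0:
                self._cond.wait()
            self._available -= 1

    def release(self) -> None:
        """Called on disk completion; returns a token and wakes one waiter."""
        with self._cond:
            self._available = min(self._available + 1, self._capacity)
            self._cond.notify()

    def resize(self, new_capacity: int) -> None:
        """Dynamically change the total number of tokens."""
        with self._cond:
            self._available += new_capacity - self._capacity
            self._capacity = new_capacity
            self._cond.notify_all()
```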
Step 1004, if the processing token exists, acquiring the processing token, and processing the current business operation task to be processed in the sequence list.
In implementation, if a processing token exists, the number of service operation tasks currently being processed by the mechanical disk has not reached the upper limit, so the computer device acquires the processing token based on the processing token mechanism of the mechanical disk and processes the current service operation task to be processed in the sequence list.
Step 1006, judging whether the service operation task to be processed exists in the sequence list, if so, executing the step of judging whether the processing token exists in the current mechanical disc delivery thread or not until all the service operation tasks to be processed in the sequence list are processed.
In implementation, the computer device determines whether the service operation task to be processed exists in the sequence list, and if so, executes step 1002 above until all the service operation tasks to be processed in the sequence list are processed.
Specifically, processing each service operation task in the sequence list requires a corresponding processing token. Therefore, if an unprocessed service operation task still exists in the sequence list and a processing token exists in the mechanical disc delivery thread, the processing token is acquired, and the mechanical disc delivery thread processes an unprocessed service operation task IO request as triggered by the processing token. If no processing token exists in the mechanical disc delivery thread at this time, the service needs to wait until the mechanical disc completes other service operation tasks and releases a processing token, after which the computer device can process the next service operation task based on the newly released processing token. Based on this processing flow, the computer device processes each business operation task IO in the sequence list in turn until the current sequence list is fully processed. Further, the computer device may also determine whether a sequence list to be processed still exists in the current mechanical disk storage system; if so, the sequence list to be processed is handled according to the above method for processing the service operation tasks in a sequence list, until all the sequence lists in the mechanical disk storage system are processed, which is not described in detail again in the embodiment of the present application.
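The loop of steps 1002 to 1006 could then be sketched as follows, assuming a token pool object like the one sketched above and a deliver_to_disk callable supplied by the storage system; both names are hypothetical stand-ins for whatever the implementation actually uses.

```python
from collections import deque

def process_sequence_list(seq_list, pool, deliver_to_disk):
    """Drain one sequence list through the mechanical disc delivery thread."""
    pending = deque(seq_list)
    while pending:                        # step 1006: tasks still to process?
        if not pool.try_acquire():        # step 1002: is a token available now?
            pool.wait_and_acquire()       # no: wait until the disk releases one
        task = pending.popleft()
        deliver_to_disk(task)             # step 1004: hand the IO to the disk;
        # the disk's completion callback is expected to call pool.release()
```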
In this embodiment, for a plurality of sequential lists obtained after aggregation processing, processing is performed on service operation tasks included in each sequential list according to a processing token mechanism in a mechanical disc, so as to improve processing performance of front-end service operation tasks.
In one embodiment, as shown in fig. 11, an example of a service processing method is provided, the method comprising:
Step 1101, obtaining business operation tasks. Each business operation task corresponds to a business operation type; the business operation tasks include a write operation task, a read operation task and a flush operation task.
In step 1102, the write operation task and the read operation task are added to the write operation queue and the read operation queue respectively, a current flush data amount threshold is obtained, and flush operation tasks whose data amount equals the current flush data amount threshold are added to the flush operation queue. The current flush data amount threshold is obtained according to the service processing speed of the mechanical disk.
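As one possible illustration of how the current flush data amount threshold might follow the service processing speed of the mechanical disk, the sketch below halves the measured per-second throughput to obtain the threshold; the 1/2 factor, the one-second window and the dictionary task representation are assumptions made only for this example.

```python
from collections import deque

def current_flush_threshold(bytes_processed_last_second: int) -> int:
    """Admit roughly half of what the disk proved able to handle last second."""
    return max(bytes_processed_last_second // 2, 0)

def fill_flush_queue(flush_backlog: deque, flush_queue: deque, threshold: int) -> int:
    """Move flush tasks into the flush queue until the threshold is reached."""
    admitted = 0
    while flush_backlog and admitted + flush_backlog[0]["size"] <= threshold:
        task = flush_backlog.popleft()
        flush_queue.append(task)
        admitted += task["size"]
    return admitted
```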
And step 1103, according to the service address numbers carried by the business operation tasks, sorting all the business operation tasks in the different service processing queues, determining the processing sequence of all the business operation tasks, and adding the business operation tasks into the mechanical disk delivery thread queue according to the processing sequence.
And step 1104, performing aggregation processing on each business operation task contained in the mechanical disc delivery thread queue to obtain a sequence list corresponding to each aggregation processing result.
In step 1105, a target sequence list is determined from the sequence lists.
Step 1106, it is determined whether a processing token exists in the current mechanical disc delivery thread, and if so, the service operation tasks included in the sequence list are processed in sequence according to the processing tokens delivered by the mechanical disc. The total number of processing tokens is also dynamically adjusted based on the service processing speed of the mechanical disk.
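A sketch of the dynamic adjustment of the processing token count mentioned in step 1106 might look as follows; the lower and upper bounds and the one-token-per-completed-task-per-second rule are assumptions of this example rather than details given by the embodiment.

```python
def adjust_token_capacity(completed_last_window: int,
                          window_seconds: float,
                          min_tokens: int = 4,
                          max_tokens: int = 128) -> int:
    """Re-size the token pool from the disk's recently observed throughput."""
    tasks_per_second = completed_last_window / max(window_seconds, 1e-6)
    # One in-flight token per task the disk can complete per second, clamped;
    # e.g. pool.resize(adjust_token_capacity(completed, 1.0)) for a pool like
    # the TokenPool sketched earlier.
    return int(min(max(tasks_per_second, min_tokens), max_tokens))
```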
It should be understood that, although the steps in the flowcharts related to the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the steps are not strictly limited to this order of execution and may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of execution of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with at least some of the other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a service processing device for realizing the service processing method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in one or more embodiments of the service processing device provided below may refer to the limitation of the service processing method hereinabove, and will not be repeated herein.
In one embodiment, as shown in fig. 12, there is provided a service processing apparatus 1200, including: an obtaining module 1210, an aggregation ordering module 1220, and a processing module 1230, wherein:
An obtaining module 1210, configured to obtain a service operation task; each business operation task corresponds to a business operation type;
The aggregation ordering module 1220 is configured to perform aggregation ordering processing on the service operation task according to a service address number carried by the service operation task, so as to obtain a sequence list corresponding to each aggregation processing result;
And the processing module 1230 is configured to process the service operation tasks included in the sequence list according to the processing tokens delivered by the mechanical disc.
By adopting the apparatus 1200, the service processing sequence of the mechanical disk is re-planned by sorting and aggregating the various service operation tasks, the time during which the magnetic head swings widely without doing useful work is reduced, and the service processing efficiency is improved.
In one embodiment, the business operation type includes a write operation type, a read operation type, and a flush operation type, and the business operation task includes a write operation task, a read operation task, and a flush operation task; the aggregation ordering module 1220 is specifically configured to add a write operation task and a read operation task to the write operation queue and the read operation queue, respectively;
Acquiring a current flush data amount threshold, and adding flush operation tasks whose data amount is equal to the current flush data amount threshold into the flush operation queue; the current flush data amount threshold is obtained according to the service processing speed of the mechanical disk.
In one embodiment, the aggregation ordering module 1220 is specifically configured to order the service operation tasks according to service address numbers carried by the service operation tasks, determine a processing sequence of all the service operation tasks, and add the service operation tasks to a mechanical disc delivery thread queue according to the processing sequence;
And carrying out aggregation processing on each business operation task contained in the mechanical disc delivery thread queue to obtain a sequence list corresponding to each aggregation processing result.
In one embodiment, the aggregation ordering module is specifically configured to aggregate all service operation tasks based on a processing sequence of all service operation tasks included in the mechanical disc delivery thread queue, so as to obtain a plurality of aggregation processing results;
And determining the business operation task contained in each aggregation processing result as a sequence list to obtain a plurality of sequence lists.
In one embodiment, the apparatus 1200 further comprises:
The judging module is used for judging whether an overtime sequence list exists among the sequence lists according to the operation time information corresponding to each sequence list;
the determining module is used for determining the processing priority of each sequence list according to the continuity, the service priority and the operation time information corresponding to each sequence list if no overtime sequence list exists;
And the delivery module is used for determining a target sequence list with the highest processing priority from the sequence lists and delivering the target sequence list to the mechanical disc.
In one embodiment, the apparatus 1200 further comprises:
and the delivery module is also used for taking the overtime sequence list as a target sequence list and delivering to the mechanical disk if the overtime sequence list exists.
In one embodiment, each service operation task corresponds to a service operation type priority, and the determining module is specifically configured to count the size of a mechanical disk storage space to be operated by all service operation tasks in each sequence list, and take the size of the mechanical disk storage space to be operated by the service operation task as the continuity of the sequence list;
taking the priority corresponding to the service operation type of the first target service operation task in each sequence list as the service priority of the sequence list; the first target business operation task is a business operation task with the highest business operation type priority in the sequence list;
The service operation time corresponding to the second target service operation task in each sequence list is used as operation time information of the sequence list; the second target business operation task is a business operation task with earliest operation time in the sequence list;
And determining the processing priority of each sequence list according to the continuity, the service priority, the operation time information and the preset priority algorithm corresponding to each sequence list.
In one embodiment, the processing module 1230 is specifically configured to determine whether a processing token exists for the current mechanical disk delivery thread;
If the processing token exists, the processing token is obtained, and the current business operation task to be processed in the sequence list is processed;
judging whether a service operation task to be processed still exists in the sequence list, and if so, executing the step of judging whether a processing token exists in the current mechanical disc delivery thread, until all the service operation tasks to be processed in the sequence list are processed.
The various modules in the service processing apparatus 1200 described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 13. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is for storing data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a business processing method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 13 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory can include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processor referred to in the embodiments provided in the present application may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, a data processing logic unit based on quantum computing, or the like, but is not limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application and are described in detail herein without thereby limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of the application should be assessed as that of the appended claims.

Claims (11)

1. A method of service processing, the method comprising:
acquiring a business operation task; the service operation task comprises a write operation task, a read operation task and a flush operation task, and a plurality of service queues of different types are established;
acquiring a flush data amount threshold determined based on a QoS mechanism at the current mechanical disk processing speed, and, taking the current flush data amount threshold as the basis for the data amount of flush operation tasks added to the flush operation queue, adding flush operation tasks whose data amount is equal to the current flush data amount threshold into the flush operation queue;
According to the service address number carried by the service operation task, carrying out aggregation sequencing processing on the service operation task based on a preset aggregation algorithm and according to the service address corresponding to each service operation task and a preset distance threshold value, and obtaining a sequence list corresponding to each aggregation processing result; or based on a preset service address center, carrying out aggregation sequencing treatment on each service operation task to obtain a sequence list corresponding to each aggregation treatment result;
and processing the business operation tasks contained in the sequence list in sequence according to the processing tokens held by the mechanical disc delivery threads.
2. The method of claim 1, wherein each of the service operation tasks corresponds to a service operation type, and wherein the acquiring a flush data amount threshold determined based on a QoS mechanism at the current mechanical disk processing speed, and, taking the current flush data amount threshold as the basis for the data amount of flush operation tasks added to the flush operation queue, adding flush operation tasks whose data amount is equal to the current flush data amount threshold into the flush operation queue comprises:
adding the write operation task and the read operation task to a write operation queue and a read operation queue respectively;
acquiring a current flush data amount threshold, and adding the flush operation tasks whose data amount is equal to the current flush data amount threshold into the flush operation queue, so as to perform aggregation sorting processing on the business operation tasks in the write operation queue, the read operation queue and the flush operation queue; the current flush data amount threshold is obtained according to the service processing speed of the mechanical disc.
3. The method according to claim 1 or 2, wherein the performing, according to the service address number carried by the service operation task, the aggregation ordering processing on the service operation task to obtain a sequential list corresponding to each aggregation processing result includes:
According to the service address numbers carried by the service operation tasks, sequencing the service operation tasks, determining the processing sequence of all the service operation tasks, and adding the service operation tasks into a mechanical disc delivery thread queue according to the processing sequence;
And carrying out aggregation processing on each business operation task contained in the mechanical disc delivery thread queue to obtain a sequence list corresponding to each aggregation processing result.
4. The method according to claim 1, wherein after the aggregating and sorting the service operation tasks according to the service address numbers carried by the service operation tasks to obtain a sequential list corresponding to each aggregate processing result, before the processing of the service operation tasks included in the sequential list in sequence according to the processing tokens held by the mechanical disc delivery thread, the method further comprises:
judging whether an overtime sequence list exists among the sequence lists according to the operation time information corresponding to each sequence list;
if the overtime sequence list does not exist, determining the processing priority of each sequence list according to the corresponding continuity, service priority and operation time information of each sequence list;
And determining the target sequence list with the highest processing priority from the sequence lists, and delivering the target sequence list to the mechanical disc.
5. The method according to claim 4, wherein the method further comprises:
And if the overtime sequence list exists, the overtime sequence list is used as a target sequence list, and the delivery is carried out to the mechanical disc.
6. The method of claim 4, wherein each of the business operation tasks corresponds to a business operation type priority, wherein the determining the processing priority of each of the sequential lists according to the continuity, business priority and operation time information corresponding to each of the sequential lists comprises:
Counting the size of the storage space of the mechanical disc to be operated of all the business operation tasks in each sequence list, and taking the size of the storage space of the mechanical disc to be operated as the continuity of the sequence list;
Taking the priority corresponding to the service operation type of the first target service operation task in each sequence list as the service priority of the sequence list; the first target business operation task is a business operation task with the highest business operation type priority in the sequence list;
Taking the service operation time corresponding to the second target service operation task in each sequence list as operation time information of the sequence list; the second target business operation task is a business operation task with earliest operation time in the sequence list;
and determining the processing priority of each sequence list according to the continuity, the service priority, the operation time information and the preset priority algorithm corresponding to each sequence list.
7. The method according to claim 1, wherein the sequentially processing the business operation tasks included in the sequential list according to the processing tokens held by the mechanical disc delivery thread includes:
judging whether a processing token exists in the current mechanical disc delivery thread or not;
If the processing token exists, the processing token is obtained, and the current business operation task to be processed in the sequence list is processed;
judging whether the service operation task to be processed exists in the sequence list, if so, executing the step of judging whether the processing token exists in the current mechanical disc delivery thread or not until all the service operation tasks to be processed in the sequence list are processed.
8. A service processing apparatus, the apparatus comprising:
The acquisition module is used for acquiring a business operation task; each business operation task corresponds to a business operation type;
The aggregation ordering module is used for acquiring a flush data amount threshold determined based on a QoS mechanism at the current mechanical disk processing speed, taking the current flush data amount threshold as the basis for the data amount of flush operation tasks added to the flush operation queue, and adding flush operation tasks whose data amount is equal to the current flush data amount threshold into the flush operation queue;
The aggregation ordering module is further used for carrying out aggregation ordering processing on the service operation tasks according to service address numbers carried by the service operation tasks and based on a preset aggregation algorithm and service addresses corresponding to the service operation tasks and a preset distance threshold value, so as to obtain a sequence list corresponding to each aggregation processing result; or based on a preset service address center, carrying out aggregation sequencing treatment on each service operation task to obtain a sequence list corresponding to each aggregation treatment result;
and the processing module is used for processing the business operation tasks contained in the sequence list according to the processing tokens delivered by the mechanical disc.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
11. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.


