CN115543214B - Data storage method, device, equipment and medium in low-delay scene - Google Patents
- Publication number
- CN115543214B (application CN202211487806.9A)
- Authority
- CN
- China
- Prior art keywords
- memory queue
- message data
- data
- disk
- index object
- Legal status: Active (assumed; not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
- G06F3/0674—Disk device
- G06F3/0676—Magnetic disk device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/061—Improving I/O performance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0638—Organizing or formatting or addressing of data
- G06F3/0643—Management of files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0652—Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the technical field of big data and provides a data storage method, device, equipment, and medium for low-latency scenarios. The method comprises the following steps: appending message data received in a low-latency scenario to a log file and writing the log file to a disk; acquiring each piece of message data in the log file and generating an index object for it; storing the index objects in a memory queue; detecting whether the memory queue is full, and measuring the time interval since index objects were last dumped from the memory queue to the disk; and, when the memory queue and/or the time interval meets the configured condition, dumping the index objects from the memory queue to the disk and deleting the dumped index objects from the queue. The invention enables high-speed data storage in low-latency scenarios without separate deployment and maintenance.
Description
Technical Field
The invention relates to the technical field of big data, in particular to a data storage method, a data storage device, data storage equipment and a data storage medium in a low-delay scene.
Background
A storage engine is the low-level software component of a database responsible for storing, reading, modifying, and deleting data. Many databases are available today, each with its own storage engine; however, these engines are tightly bound to their databases and provide many complex features, so they cannot be used standalone as a storage solution, which hurts storage efficiency. Moreover, a database is an independent component and must be deployed and maintained separately.
This is especially true for low-latency scenarios such as financial systems: data is written strictly in order and only appended, never deleted or modified, and retrieval touches only contiguous intervals rather than random positions, so the demands on storage efficiency are high.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a data storage method, device, apparatus and medium in a low latency scenario, which aims to solve the problem of low data storage efficiency in the low latency scenario.
A data storage method under a low-delay scene comprises the following steps:
when message data under a low-delay scene is received, the message data is added to a log file, and the log file is written into a disk;
acquiring each message data in the log file, and generating an index object according to each message data;
storing the index object to a memory queue;
detecting whether the memory queue is full, and detecting the time interval since the index objects in the memory queue were last dumped to the disk;
and when the memory queue and/or the time interval reaches the configured condition, dumping the index objects in the memory queue from the memory queue to the disk, and deleting the dumped index objects from the memory queue.
According to a preferred embodiment of the present invention, said appending the message data to a log file comprises:
acquiring the file volume of each log file;
writing the message data into at least one log file in sequence according to the time sequence and the file volume of each log file;
and sequentially configuring the sequence numbers of the message data according to the writing sequence of the message data, and naming each log file according to the sequence number interval of the message data contained in each log file.
According to a preferred embodiment of the present invention, the generating the index object according to each message data includes:
acquiring the storage address of each message data in the disk;
generating an index object of each message data according to the storage address of each message data in the disk;
and the sequence number of each index object is sequentially configured according to the generation sequence of each index object, and each index object is named according to the sequence number of each index object.
According to a preferred embodiment of the present invention, the storing the index object to the memory queue includes:
acquiring message attributes of message data corresponding to each index object;
acquiring a preset writing rule;
and writing the index object into the memory queue according to the writing rule and the message attribute.
According to a preferred embodiment of the present invention, before the index object in the memory queue is transferred from the memory queue to the disk, the method further includes:
acquiring a pre-configured time threshold;
comparing the time interval to the time threshold;
and when the memory queue is full, and/or the time interval reaches the time threshold while index objects that have not been dumped to the disk remain in the memory queue, determining that the memory queue and/or the time interval meets the configured condition.
According to a preferred embodiment of the present invention, after the index object in the memory queue is transferred from the memory queue to the disk, the method further includes:
when a data query request is received, determining message data to be queried according to the data query request;
acquiring an index object of the message data to be inquired;
determining the storage address of the message data to be inquired in the disk according to the index object of the message data to be inquired;
and reading message data from the storage address of the message data to be queried in the disk as a response result of the data query request.
According to a preferred embodiment of the invention, the method further comprises:
creating a message query interface and a message write-in interface;
the message query interface is called to perform data query, and the message write interface is called to perform data write.
A data storage device in a low latency scenario, the data storage device in the low latency scenario comprising:
the write-in unit is used for adding the message data to a log file when the message data in a low-delay scene is received, and writing the log file into a disk;
the generating unit is used for acquiring each message data in the log file and generating an index object according to each message data;
the storage unit is used for storing the index object to a memory queue;
the detection unit is used for detecting whether the memory queue is full and detecting a time interval from the last time of transferring the index object in the memory queue to the disk;
and the dump unit is used for dumping the index objects in the memory queue from the memory queue to the disk and deleting the dumped index objects from the memory queue when the memory queue and/or the time interval reaches the configured condition.
A computer device, the computer device comprising:
a memory storing at least one instruction; and
a processor that executes the instructions stored in the memory to implement the data storage method in the low-latency scenario.
A computer-readable storage medium having stored therein at least one instruction for execution by a processor in a computer device to implement a data storage method in the low latency scenario.
According to the technical scheme, the data can be stored at high speed in a low-delay scene, and independent deployment and maintenance are not needed.
Drawings
FIG. 1 is a flow chart of a data storage method in a low latency scenario according to a preferred embodiment of the present invention.
FIG. 2 is a functional block diagram of a data storage device in a low latency scenario according to a preferred embodiment of the present invention.
Fig. 3 is a schematic structural diagram of a computer device according to a preferred embodiment of the present invention, which implements the data storage method in a low latency scenario.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific embodiments.
Fig. 1 is a flow chart of a data storage method in a low latency scenario according to a preferred embodiment of the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs.
The data storage method under the low-latency scene is applied to one or more computer devices, which are devices capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and the hardware of the computer devices includes but is not limited to a microprocessor, an Application Specific Integrated Circuit (ASIC), a Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be any electronic product capable of performing human-computer interaction with a user, for example, a Personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game machine, an interactive web Television (IPTV), an intelligent wearable device, and the like.
The computer device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network servers.
The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
Among them, artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
The Network in which the computer device is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
S10, when message data in a low-latency scenario is received, the message data is appended to a log file, and the log file is written to the disk.
In this embodiment, a low-latency scenario is one with strict latency requirements, such as publishing transaction data in a financial scenario.
In this embodiment, the appending the message data to the log file includes:
acquiring the file volume of each log file;
writing the message data into at least one log file in sequence according to the time sequence and the file volume of each log file;
and the sequence numbers of the message data are sequentially configured according to the writing sequence of the message data, and each log file is named according to the sequence number interval of the message data contained in each log file.
For example, 100 pieces of message data can be written to each log file. The sequence number interval of the first log file would then be 001-100; when the first file is full, the second file takes writes starting from sequence number 101 until the 200th message is written, and so on.
Unlike an ordinary log, this embodiment names each log file by its sequence number interval, so the starting and ending sequence numbers of the data a file holds can be read directly from its file name, which identifies the data stored in each file and eases subsequent queries. For example, when data with sequence numbers 001-100 is needed, the log file named 001-100 can be read directly, without fetching all files; the required file is obtained quickly, and analysis is not slowed by reading oversized files.
In this embodiment, log files are written to the disk sequentially in time order, exploiting the disk's strength at sequential writes and accumulating larger runs of ordered data. Because the message data is written to the disk first, data loss from abnormal conditions such as a system or network crash is effectively avoided.
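The file-naming scheme above can be sketched in Python; the function names and the `.log` suffix are illustrative assumptions, not taken from the patent:

```python
MESSAGES_PER_FILE = 100  # per-file capacity used in the example above

def log_file_name(first_seq: int, per_file: int = MESSAGES_PER_FILE) -> str:
    """Name a log file by the sequence-number interval it holds, e.g. '001-100.log'."""
    last_seq = first_seq + per_file - 1
    return f"{first_seq:03d}-{last_seq:03d}.log"

def file_for_sequence(seq: int, per_file: int = MESSAGES_PER_FILE) -> str:
    """Find, from file names alone, the log file holding a given sequence number."""
    first_seq = (seq - 1) // per_file * per_file + 1
    return log_file_name(first_seq, per_file)
```

A query for sequence number 150 would then read only the file named 101-200.log rather than scanning every log file.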
S11, acquiring each message data in the log file, and generating an index object (index) according to each message data.
In this embodiment, the generating an index object according to each message data includes:
acquiring the storage address of each message data in the disk;
generating an index object of each message data according to the storage address of each message data in the disk;
and the sequence number of each index object is sequentially configured according to the generation sequence of each index object, and each index object is named according to the sequence number of each index object.
In the above embodiment, the manner in which the sequence number of each index object is sequentially configured according to the generation sequence of each index object, and each index object is named according to the sequence number of each index object is similar to the manner in which the sequence numbers of the message data are sequentially configured according to the writing sequence of the message data, and each log file is named according to the sequence number interval of the message data included in each log file, which is not described herein again.
For example: the structure of the index object may be: index1| data1_ pos | index2| data2_ pos | index3| data3_ pos.; the sequence number of each index object is sequentially increased according to the generation sequence, and is respectively 1, 2 and 3, and so on.
And S12, storing the index object to a memory queue.
It can be understood that, since indexing may be by time order or by other attributes such as topic, the index objects need to be written to memory first.
Specifically, the storing the index object to the memory queue includes:
acquiring message attributes of message data corresponding to each index object;
acquiring a preset writing rule;
and writing the index object into the memory queue according to the writing rule and the message attribute.
Wherein the message attributes may include, but are not limited to: generation time, topic type, etc.
Wherein the writing rules may include, but are not limited to: write by time, write by topic type, etc.
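A sketch of how a write rule and a message attribute might select the target memory queue; the rule names and attribute keys are assumptions for illustration:

```python
from collections import deque

def enqueue_index(queues: dict, index_obj, attrs: dict, rule: str = "by_time") -> str:
    """Write an index object into a memory queue according to the configured
    rule: one shared time-ordered queue, or one queue per topic type."""
    if rule == "by_topic":
        key = attrs.get("topic", "default")  # route by the message's topic type
    else:
        key = "all"  # "by_time": a single queue appended in arrival order
    queues.setdefault(key, deque()).append(index_obj)
    return key
```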
By generating the index object according to each message data, the efficiency of data query can be improved.
S13, detecting whether the memory queue is full, and detecting the time interval from the last time of transferring the index object in the memory queue to the disk.
S14, when the memory queue and/or the time interval reach configuration conditions, the index objects in the memory queue are transferred from the memory queue to the disk, and the index objects transferred to the disk are deleted from the memory queue.
In this embodiment, before the index object in the memory queue is transferred from the memory queue to the disk, it is further required to detect whether the memory queue and/or the time interval reach the configuration condition.
Specifically, before the index object in the memory queue is transferred from the memory queue to the disk, the method further includes:
acquiring a pre-configured time threshold;
comparing the time interval to the time threshold;
and when the memory queue is full, and/or the time interval reaches the time threshold while index objects that have not been dumped to the disk remain in the memory queue, determining that the memory queue and/or the time interval meets the configured condition.
For example: when the index object in the memory queue reaches 8192, that is, the memory queue is full, and/or 3 minutes has elapsed since the last time the index object in the memory queue was transferred from the memory queue to the disk, at this time, no matter whether the index object in the memory queue reaches 8192, as long as the memory queue has the index object, it is determined that the memory queue and/or the time interval reach the configuration condition.
Further, the index object in the memory queue is transferred from the memory queue to the disk, and the index object transferred to the disk is deleted from the memory queue, so that persistence of message data is achieved, and unlike database storage, independent deployment and maintenance are required, and other events at the bottom layer need to be processed.
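The dump trigger described above, using the 8192-entry capacity and 3-minute interval from the example, could be checked as follows (both constants are configuration values, not fixed by the method):

```python
QUEUE_CAPACITY = 8192     # example queue capacity from the text
FLUSH_INTERVAL_S = 180.0  # example time threshold: 3 minutes

def should_dump(queue_len: int, seconds_since_last_dump: float) -> bool:
    """Return True when the memory queue should be dumped to disk: the queue
    is full, or the time threshold has passed and the queue is non-empty."""
    if queue_len >= QUEUE_CAPACITY:
        return True
    return seconds_since_last_dump >= FLUSH_INTERVAL_S and queue_len > 0
```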
In this embodiment, after the index object in the memory queue is transferred from the memory queue to the disk, the method further includes:
when a data query request is received, determining message data to be queried according to the data query request;
acquiring an index object of the message data to be queried;
determining the storage address of the message data to be inquired in the disk according to the index object of the message data to be inquired;
and reading message data from the storage address of the message data to be queried in the disk as a response result of the data query request.
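The query path above, from index object to storage address to message bytes, can be sketched as follows; representing the disk as a bytes object and the index as a sequence-number map of (offset, length) pairs is a simplifying assumption:

```python
def query_message(disk: bytes, index: dict[int, tuple[int, int]], seq: int) -> bytes:
    """Resolve a data query: look up the index object for the requested
    sequence number, then read the message at its recorded storage address."""
    pos, size = index[seq]
    return disk[pos:pos + size]
```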
Through the embodiment, the data can be efficiently queried based on the index object.
In this embodiment, the method further includes:
creating a message query interface and a message write-in interface;
the message query interface is called to perform data query, and the message write interface is called to perform data write.
In this embodiment, by creating well-encapsulated message query and message write interfaces, a program can write or query data directly by calling the corresponding interface, which reduces the complexity of data read and write operations. Combined with the disk's sequential-write characteristic, write efficiency is further improved; combined with the index objects, binary search becomes possible, further improving data read and query efficiency.
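The binary search over sequence-ordered index objects mentioned here might look like this (a sketch; the entry layout follows the earlier index1|data1_pos|… example):

```python
import bisect

def find_index_entry(entries: list[tuple[int, int]], target_seq: int):
    """Binary-search index entries sorted by sequence number; each entry is
    (sequence number, storage address). Returns the entry or None."""
    seqs = [seq for seq, _ in entries]
    i = bisect.bisect_left(seqs, target_seq)
    if i < len(seqs) and seqs[i] == target_seq:
        return entries[i]
    return None
```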
According to the technical scheme, the data can be stored at high speed in a low-delay scene, and independent deployment and maintenance are not needed.
Fig. 2 is a functional block diagram of a data storage device in a low latency scenario according to a preferred embodiment of the present invention. The data storage device 11 in the low latency scenario includes a writing unit 110, a generating unit 111, a storing unit 112, a detecting unit 113, and a dump unit 114. A module/unit as referred to herein is a series of computer program segments stored in a memory that can be executed by a processor and that can perform a fixed function. In the present embodiment, the functions of the modules/units will be described in detail in the following embodiments.
The writing unit 110 is configured to, when receiving message data in a low-latency scenario, append the message data to a log file and write the log file to the disk.
In this embodiment, a low-latency scenario is one with strict latency requirements, such as publishing transaction data in a financial scenario.
In this embodiment, the writing unit 110 appending the message data to a log file includes:
acquiring the file volume of each log file;
writing the message data into at least one log file in sequence according to the time sequence and the file volume of each log file;
and sequentially configuring the sequence numbers of the message data according to the writing sequence of the message data, and naming each log file according to the sequence number interval of the message data contained in each log file.
For example, 100 pieces of message data can be written to each log file. The sequence number interval of the first log file would then be 001-100; when the first file is full, the second file takes writes starting from sequence number 101 until the 200th message is written, and so on.
Unlike an ordinary log, this embodiment names each log file by its sequence number interval, so the starting and ending sequence numbers of the data a file holds can be read directly from its file name, which identifies the data stored in each file and eases subsequent queries. For example, when data with sequence numbers 001-100 is needed, the log file named 001-100 can be read directly, without fetching all files; the required file is obtained quickly, and analysis is not slowed by reading oversized files.
In this embodiment, log files are written to the disk sequentially in time order, exploiting the disk's strength at sequential writes and accumulating larger runs of ordered data. Because the message data is written to the disk first, data loss from abnormal conditions such as a system or network crash is effectively avoided.
The generating unit 111 is configured to acquire each message data in the log file, and generate an index object (index) according to each message data.
In this embodiment, the generating, by the generating unit 111, an index object according to each message data includes:
acquiring the storage address of each message data in the disk;
generating an index object of each message data according to the storage address of each message data in the disk;
and the sequence number of each index object is sequentially configured according to the generation sequence of each index object, and each index object is named according to the sequence number of each index object.
In the above embodiment, the manner of sequentially configuring the sequence number of each index object according to the generation sequence of each index object and naming each index object according to the sequence number of each index object is similar to the manner of sequentially configuring the sequence number of the message data according to the writing sequence of the message data and naming each log file according to the sequence number interval of the message data included in each log file, which is not described herein again.
For example: the structure of the index object may be: index1| data1_ pos | index2| data2_ pos | index3| data3_ pos.; the sequence number of each index object is sequentially increased according to the generation sequence, and is 1, 2, 3 … …, and so on.
The storing unit 112 is configured to store the index object in a memory queue.
It can be understood that, since indexing may be by time order or by other attributes such as topic, the index objects need to be written to memory first.
Specifically, the storing, by the storing unit 112, the index object in the memory queue includes:
acquiring message attributes of message data corresponding to each index object;
acquiring a preset writing rule;
and writing the index object into the memory queue according to the writing rule and the message attribute.
Wherein the message attributes may include, but are not limited to: generation time, topic type, etc.
Wherein the writing rules may include, but are not limited to: write by time, write by topic type, etc.
By generating the index object according to each message data, the efficiency of data query can be improved.
The detecting unit 113 is configured to detect whether the memory queue is full, and detect a time interval from the last time of transferring the index object in the memory queue to the disk.
The dump unit 114 is configured to dump the index objects in the memory queue from the memory queue to the disk and delete the dumped index objects from the memory queue when the memory queue and/or the time interval reaches the configured condition.
In this embodiment, before the index object in the memory queue is transferred from the memory queue to the disk, it is further required to detect whether the memory queue and/or the time interval reach the configuration condition.
Specifically, a preconfigured time threshold is obtained before the index object in the memory queue is transferred from the memory queue to the disk;
comparing the time interval to the time threshold;
and when the memory queue is full, and/or the time interval reaches the time threshold while index objects that have not been dumped to the disk remain in the memory queue, determining that the memory queue and/or the time interval meets the configured condition.
For example, when the number of index objects in the memory queue reaches 8192, i.e., the queue is full, and/or 3 minutes have elapsed since the index objects were last dumped from the memory queue to the disk, the configured condition is considered met; in the latter case, the dump is triggered as long as the queue contains any index objects, whether or not there are 8192 of them.
Further, the index objects in the memory queue are dumped from the memory queue to the disk, and the dumped index objects are deleted from the queue, which persists the message data. Unlike database storage, this requires no separate deployment and maintenance and no handling of other low-level events.
In this embodiment, after the index object in the memory queue is transferred from the memory queue to the disk, when a data query request is received, determining message data to be queried according to the data query request;
acquiring an index object of the message data to be inquired;
determining the storage address of the message data to be inquired in the disk according to the index object of the message data to be inquired;
and reading message data from the storage address of the message data to be queried in the disk as a response result of the data query request.
Through the embodiment, efficient query on data can be realized based on the index object.
In this embodiment, a message query interface and a message write interface are created;
the message query interface is called to perform data queries, and the message write interface is called to perform data writes.
By creating these well-encapsulated interfaces, a program can write or query data directly by calling the corresponding interface, which reduces the complexity of data read and write operations. Combined with the sequential-write characteristic of the disk, write efficiency is further improved; combined with binary search over the index objects, read and query efficiency is further improved as well.
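A minimal sketch of what such encapsulated interfaces might look like, with a byte buffer standing in for the sequentially written log file; the class and method names are assumptions for illustration, not the patent's API.

```python
class MessageStore:
    """Illustrative pairing of a message write interface and a message query
    interface over an append-only log with per-message index objects."""

    def __init__(self):
        self._log = bytearray()  # stands in for the sequentially written log file
        self._index = []         # (sequence_number, offset, length), in write order

    def write_message(self, data: bytes) -> int:
        """Message write interface: append the message to the log and record
        an index object holding its storage address. Returns the sequence number."""
        seq = len(self._index) + 1
        self._index.append((seq, len(self._log), len(data)))
        self._log.extend(data)
        return seq

    def query_message(self, seq: int):
        """Message query interface: resolve the storage address from the index
        object, then read the message bytes from that address."""
        if 1 <= seq <= len(self._index):
            _, offset, length = self._index[seq - 1]
            return bytes(self._log[offset:offset + length])
        return None
```

Because sequence numbers are assigned consecutively, the index object can be located by direct position here; with gaps, a binary search as in the previous sketch would apply.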
According to the technical scheme above, data can be stored at high speed in a low-latency scenario, without any separate deployment and maintenance.
Fig. 3 is a schematic structural diagram of a computer device according to a preferred embodiment of the present invention, which implements a data storage method in a low latency scenario.
The computer device 1 may comprise a memory 12, a processor 13 and a bus, and may further comprise a computer program stored in the memory 12 and executable on the processor 13, such as a data storage program in a low latency scenario.
It will be appreciated by those skilled in the art that the schematic diagram is merely an example of the computer device 1 and does not constitute a limitation on it. The computer device 1 may have a bus structure or a star structure, may include more or fewer hardware or software components than those shown, or a different arrangement of components; for example, it may further include an input/output device, a network access device, and the like.
It should be noted that the computer device 1 is only an example; other electronic products, whether currently available or developed in the future, that can be adapted to the present invention should also fall within the scope of protection of the present invention and are incorporated herein by reference.
The memory 12 includes at least one type of readable storage medium, which includes flash memory, removable hard disks, multimedia cards, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disks, optical disks, etc. The memory 12 may in some embodiments be an internal storage unit of the computer device 1, for example a removable hard disk of the computer device 1. The memory 12 may also be an external storage device of the computer device 1 in other embodiments, such as a plug-in removable hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), etc. provided on the computer device 1. Further, the memory 12 may also include both an internal storage unit and an external storage device of the computer device 1. The memory 12 may be used not only to store application software installed in the computer apparatus 1 and various types of data, such as codes of a data storage program in a low latency scenario, but also to temporarily store data that has been output or is to be output.
The processor 13 may in some embodiments be composed of integrated circuits, for example a single packaged integrated circuit, or of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, and combinations of various control chips. The processor 13 is the control unit of the computer device 1: it connects the components of the entire computer device 1 by various interfaces and lines, and executes the functions of the computer device 1 and processes its data by running or executing the programs or modules stored in the memory 12 (for example, the data storage program in a low-latency scenario) and calling the data stored in the memory 12.
The processor 13 executes the operating system of the computer device 1 and various installed application programs. The processor 13 executes the application program to implement the steps in the data storage method embodiments in the low latency scenarios, such as the steps shown in fig. 1.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 12 and executed by the processor 13 to accomplish the present invention. The one or more modules/units may be a series of computer readable instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the computer device 1. For example, the computer program may be divided into a writing unit 110, a generating unit 111, a saving unit 112, a detecting unit 113, and a dump unit 114.
The integrated unit implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions to enable a computer device (which may be a personal computer, a computer device, or a network device) or a processor (processor) to execute a part of the data storage method in the low latency scenario according to the embodiments of the present invention.
The modules/units integrated by the computer device 1 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented.
Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random-access memory (RAM), and the like.
Further, the computer-readable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to the use of the blockchain node, and the like.
Blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, each data block containing a batch of network transaction information used to verify the validity (tamper resistance) of the information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one line is shown in FIG. 3, but this does not mean there is only one bus or only one type of bus. The bus is arranged to enable communication between the memory 12, the at least one processor 13, and other components.
Although not shown, the computer device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 13 through a power management device, so that functions of charge management, discharge management, power consumption management and the like are realized through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The computer device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the computer device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the computer device 1 and other computer devices.
Optionally, the computer device 1 may further comprise a user interface, which may be a Display (Display), an input unit, such as a Keyboard (Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the computer device 1 and for displaying a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
Fig. 3 shows only the computer device 1 with the components 12-13, and it will be understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the computer device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
With reference to fig. 1, the memory 12 in the computer device 1 stores a plurality of instructions to implement a data storage method in a low latency scenario, and the processor 13 can execute the plurality of instructions to implement:
when message data under a low-delay scene is received, the message data is added to a log file, and the log file is written into a disk;
acquiring each message data in the log file, and generating an index object according to each message data;
storing the index object to a memory queue;
detecting whether the memory queue is full, and detecting a time interval from the last time of transferring the index object in the memory queue to the disk;
and when the memory queue and/or the time interval reach configuration conditions, transferring the index objects in the memory queue from the memory queue to the disk, and deleting the index objects transferred to the disk from the memory queue.
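The write path enumerated above can be sketched as follows, with an in-memory buffer standing in for the disk-resident log file and a fixed queue capacity standing in for the configuration condition; all names and the capacity value are illustrative assumptions, not the patent's implementation.

```python
from collections import deque


class WritePath:
    """Illustrative write path: append message data to a sequential log,
    generate an index object per message, park it in a memory queue, and
    transfer the queued index objects to disk when the queue fills."""

    def __init__(self, capacity=4):
        self.log = bytearray()   # stands in for the log file written to disk
        self.queue = deque()     # memory queue of index objects
        self.capacity = capacity
        self.disk_index = []     # index objects already transferred to disk

    def append(self, data: bytes):
        offset = len(self.log)
        self.log.extend(data)    # sequential append, like the log file on disk
        self.queue.append((offset, len(data)))  # index object: (offset, length)
        if len(self.queue) >= self.capacity:    # "memory queue is full"
            self.flush()

    def flush(self):
        # transfer index objects to disk, then delete them from the memory queue
        while self.queue:
            self.disk_index.append(self.queue.popleft())
```

A real implementation would also flush on the time-interval trigger; the capacity trigger alone keeps the sketch short.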
Specifically, the processor 13 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the instruction, which is not described herein again.
It should be noted that all the data involved in the present application are legally acquired.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The invention is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
Furthermore, it will be obvious that the term "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the present invention may also be implemented by one unit or means through software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.
Claims (7)
1. A data storage method under a low-delay scene is characterized by comprising the following steps:
when message data under a low-delay scene is received, the message data is added to a log file, and the log file is written into a disk;
acquiring each message data in the log file, and generating an index object according to each message data;
storing the index object to a memory queue;
detecting whether the memory queue is full, and detecting a time interval from the last time of transferring the index object in the memory queue to the disk;
when the memory queue and/or the time interval reach configuration conditions, the index objects in the memory queue are transferred from the memory queue to the disk, and the index objects transferred to the disk are deleted from the memory queue;
the appending the message data to a log file includes:
acquiring the file volume of each log file;
writing the message data into at least one log file in sequence according to the time sequence and the file volume of each log file;
sequentially configuring the sequence numbers of the message data according to the writing sequence of the message data, and naming each log file according to the sequence number interval of the message data contained in each log file;
writing the log files into the disk according to a time sequence according to the characteristic of sequential writing of the disk;
acquiring a preset time threshold before the index object in the memory queue is transferred from the memory queue to the disk; comparing the time interval to the time threshold; when the memory queue is full, and/or the time interval reaches the time threshold, and an index object in the memory queue is not transferred to the disk, determining that the memory queue and/or the time interval reaches the configuration condition;
after the index objects in the memory queue are transferred from the memory queue to the disk, when a data query request is received, determining message data to be queried according to the data query request; acquiring an index object of the message data to be queried; determining the storage address of the message data to be queried in the disk according to the index object of the message data to be queried; and reading message data from the storage address of the message data to be queried in the disk as a response result of the data query request.
2. The data storage method in a low latency scenario as claimed in claim 1, wherein the generating an index object according to each message data includes:
acquiring the storage address of each message data in the disk;
generating an index object of each message data according to the storage address of each message data in the disk;
and the sequence number of each index object is sequentially configured according to the generation sequence of each index object, and each index object is named according to the sequence number of each index object.
3. The data storage method in a low latency scenario as claimed in claim 1, wherein the saving the index object to a memory queue includes:
acquiring message attributes of message data corresponding to each index object;
acquiring a preset writing rule;
and writing the index object into the memory queue according to the writing rule and the message attribute.
4. The method for storing data in a low latency scenario as recited in claim 1, wherein the method further comprises:
creating a message query interface and a message write-in interface;
the message query interface is called to perform a data query, and the message write interface is called to perform a data write.
5. A data storage device in a low latency scenario, the data storage device in the low latency scenario comprising:
the write-in unit is used for adding the message data to a log file when the message data in a low-delay scene is received, and writing the log file into a disk;
the generating unit is used for acquiring each message data in the log file and generating an index object according to each message data;
the storage unit is used for storing the index object to a memory queue;
the detection unit is used for detecting whether the memory queue is full and detecting a time interval from the last time of transferring the index object in the memory queue to the disk;
a dump unit, configured to dump the index object in the memory queue from the memory queue to the disk and delete the index object dumped to the disk from the memory queue when the memory queue and/or the time interval reach a configuration condition;
the write unit is specifically configured to:
acquiring the file volume of each log file;
writing the message data into at least one log file in sequence according to the time sequence and the file volume of each log file;
sequentially configuring the sequence numbers of the message data according to the writing sequence of the message data, and naming each log file according to the sequence number interval of the message data contained in each log file;
writing the log files into the disk according to a time sequence according to the characteristic of sequential writing of the disk;
acquiring a preset time threshold before the index object in the memory queue is transferred from the memory queue to the disk; comparing the time interval to the time threshold; when the memory queue is full, and/or the time interval reaches the time threshold, and an index object in the memory queue is not transferred to the disk, determining that the memory queue and/or the time interval reaches the configuration condition;
after the index objects in the memory queue are transferred from the memory queue to the disk, when a data query request is received, determining message data to be queried according to the data query request; acquiring an index object of the message data to be queried; determining the storage address of the message data to be queried in the disk according to the index object of the message data to be queried; and reading message data from the storage address of the message data to be queried in the disk as a response result of the data query request.
6. A computer device, characterized in that the computer device comprises:
a memory storing at least one instruction; and
a processor executing the instructions stored in the memory to implement the data storage method in the low latency scenario of any one of claims 1 to 4.
7. A computer-readable storage medium characterized by: the computer-readable storage medium stores at least one instruction, and the at least one instruction is executed by a processor in a computer device to implement the data storage method in a low latency scenario according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211487806.9A CN115543214B (en) | 2022-11-25 | 2022-11-25 | Data storage method, device, equipment and medium in low-delay scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115543214A CN115543214A (en) | 2022-12-30 |
CN115543214B true CN115543214B (en) | 2023-03-28 |
Family
ID=84722153
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104603776A (en) * | 2012-08-08 | 2015-05-06 | 亚马逊技术股份有限公司 | Archival data storage system |
CN109634523A (en) * | 2018-12-10 | 2019-04-16 | 深圳市网心科技有限公司 | A kind of disk management method, system and electronic equipment and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11042532B2 (en) * | 2018-08-31 | 2021-06-22 | International Business Machines Corporation | Processing event messages for changed data objects to determine changed data objects to backup |
US11474742B2 (en) * | 2020-06-30 | 2022-10-18 | Microsoft Technology Licensing, Llc | Logging operations based on memory mapping in a logging system |
CN112486913B (en) * | 2020-11-12 | 2023-09-22 | 思创数码科技股份有限公司 | Log asynchronous storage method and device based on cluster environment |
CN114741449A (en) * | 2022-04-14 | 2022-07-12 | 上海川源信息科技有限公司 | Object storage method and device based on distributed database |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |