CN112783656A - Memory management method, medium, device and computing equipment - Google Patents
Memory management method, medium, device and computing equipment
- Publication number: CN112783656A (application CN202110127352.3A)
- Authority: CN (China)
- Prior art keywords: memory, data, storage space, capacity, ratio
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0608—Saving storage space on storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5022—Mechanisms to release resources
Abstract
Embodiments of the present disclosure provide a memory management method, medium, device and computing equipment. The memory includes at least a first storage space for storing first data and a second storage space for storing second data, and the method includes the following steps: acquiring a garbage collection time consumption, which represents the time spent cleaning up garbage objects in the memory; and adjusting a first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection time consumption and a preset garbage collection time consumption threshold. Embodiments of the present disclosure can improve memory utilization and avoid memory overflow.
Description
Technical Field
Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to a memory management method, medium, apparatus, and computing device.
Background
This section is intended to provide a background or context to the embodiments of the disclosure recited in the claims. The description herein is not admitted to be relevant prior art by inclusion in this section.
The ratio of the amount of data stored in the memory to the memory capacity is referred to as the memory utilization rate. If the memory capacity allocated for the data is too large, the memory cannot be fully utilized, that is, the memory utilization rate is too low; if the memory capacity allocated for the data is too small, there is a risk of memory overflow because the data amount exceeds the memory capacity. Properly configuring the memory is therefore the key to improving memory utilization and avoiding memory overflow, and the related art cannot configure the memory reasonably.
Disclosure of Invention
The present disclosure is intended to provide a memory management method and apparatus.
In a first aspect of embodiments of the present disclosure, a memory management method is provided, where the memory includes at least a first storage space for storing first data and a second storage space for storing second data, and the method includes:
acquiring garbage collection time, wherein the garbage collection time represents the time spent on cleaning the garbage objects in the memory;
and adjusting a first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection time consumption and a preset garbage collection time consumption threshold.
In one embodiment of the present disclosure, the first storage space is a strongly-referenced memory space, and the second storage space is a soft-referenced memory space.
In one embodiment of the present disclosure, further comprising:
determining the capacity of the first storage space according to the first ratio;
under the condition that the data amount in the memory is larger than the capacity of the first storage space, migrating the portion of the first data exceeding the capacity of the first storage space to the second storage space; or,
and under the condition that the data amount in the first storage space is smaller than the capacity of the first storage space and the second data exists in the second storage space, migrating at least part of the second data in the second storage space to the first storage space.
In one embodiment of the present disclosure, migrating the portion of the first data exceeding the capacity of the first storage space to the second storage space includes:
and screening third data to be migrated from the first data based on a preset data migration strategy, and migrating the third data from the first storage space to the second storage space.
In one embodiment of the present disclosure, obtaining the garbage collection time includes:
searching a preset first corresponding relation according to the capacity of the memory to obtain a corresponding first ratio suggestion value and a garbage recycling time-consuming suggestion value; the first corresponding relation represents a first ratio suggestion value and a garbage recycling time consumption suggestion value corresponding to different memory capacities;
and determining the garbage collection time consumption according to the garbage collection time consumption suggestion value.
In an embodiment of the disclosure, adjusting a first ratio between the capacity of the first storage space and the capacity of the memory according to the garbage collection consumed time and a preset garbage collection consumed time threshold includes:
and adjusting the first ratio to be equal to the first ratio suggestion value under the condition that the garbage collection consumed time is not greater than the garbage collection consumed time threshold value.
In an embodiment of the disclosure, adjusting a first ratio between the capacity of the first storage space and the capacity of the memory according to the garbage collection consumed time and a preset garbage collection consumed time threshold includes:
determining the first ratio to be a value smaller than the first ratio suggested value under the condition that the garbage recycling consumed time is larger than the garbage recycling consumed time threshold value;
determining the capacity of the first storage space according to the first ratio, and migrating the first data exceeding the capacity of the first storage space to the second storage space under the condition that the data amount in the memory is larger than the capacity of the first storage space;
acquiring a current garbage recycling time-consuming actual value from a garbage recycling log, reducing the value of the first ratio under the condition that the garbage recycling time-consuming actual value is larger than the garbage recycling time-consuming threshold value, and executing the step of determining the capacity of the first storage space again based on the reduced first ratio; and ending the adjusting process under the condition that the actual value of the garbage recycling consumed time is not greater than the garbage recycling consumed time threshold value.
In one embodiment of the present disclosure, determining the first ratio to be a value less than the first ratio recommendation value includes: determining the first ratio as half of the first ratio recommendation value;
reducing the value of the first ratio, including: and adjusting the first ratio to be half of the original value of the first ratio.
In an embodiment of the disclosure, the adjusting process is ended until the actual value of the garbage collection time is not greater than the threshold value of the garbage collection time, including:
and ending the adjusting process under the condition that the actual garbage recycling time consumption value is not larger than the garbage recycling time consumption threshold value and the difference between the actual garbage recycling time consumption value and the garbage recycling time consumption threshold value is not larger than a preset threshold value.
In one embodiment of the present disclosure, further comprising:
under the condition of different memory capacities, counting different values of the first ratio and corresponding garbage recycling time consumption counting values;
according to the statistical result, establishing a mapping relation between the first ratio and the garbage recycling time consumption statistical value under the condition of different memory capacities;
and determining the first corresponding relation according to the mapping relation.
In an embodiment of the present disclosure, the memory further includes an index area for storing an index of the first data and the second data.
In one embodiment of the present disclosure, further comprising:
determining second data which is cleared under the condition that the second data in the second storage space is cleared; and updating the index according to the cleared second data.
In one embodiment of the present disclosure, further comprising:
storing the second data cleared in the second storage space into a database;
reading the purged second data from the database;
when new data is stored in the memory, clustering the new data and the removed second data, and if clustering is successful, storing the clustered data into the memory;
and updating the index according to the clustered data.
In one embodiment of the present disclosure, the first data and the second data include public opinion text data;
the method further comprises the following steps: clustering public opinion text data in the memory;
and performing at least one of propagation path analysis, emotion data analysis and public opinion trend analysis on the clustered data.
In a second aspect of the disclosed embodiments, there is provided a memory management device, the memory including at least a first storage space for storing first data and a second storage space for storing second data, the device comprising:
the data monitoring and counting module is used for acquiring garbage collection time, and the garbage collection time represents the time spent on cleaning the garbage objects in the memory;
and the proportion dynamic adjustment module is used for adjusting a first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection time consumption and a preset garbage collection time consumption threshold.
In one embodiment of the present disclosure, the first storage space is a strongly-referenced memory space, and the second storage space is a soft-referenced memory space.
In one embodiment of the present disclosure, the apparatus further comprises:
the data migration module is used for determining the capacity of the first storage space according to the first ratio; under the condition that the data amount in the memory is larger than the capacity of the first storage space, migrating the first data exceeding the capacity part of the first storage space to the second storage space; or, in the case that the data amount in the first storage space is smaller than the capacity of the first storage space and second data exists in the second storage space, migrating at least part of the second data in the second storage space to the first storage space.
In one embodiment of the disclosure, the data migration module is to:
and screening third data to be migrated from the first data based on a preset data migration strategy, and migrating the third data from the first storage space to the second storage space.
In one embodiment of the disclosure, the data monitoring statistics module is configured to:
searching a preset first corresponding relation according to the capacity of the memory to obtain a corresponding first ratio suggestion value and a garbage recycling time-consuming suggestion value; the first corresponding relation represents a first ratio suggestion value and a garbage recycling time consumption suggestion value corresponding to different memory capacities; and determining the garbage collection time consumption according to the garbage collection time consumption suggestion value.
In one embodiment of the disclosure, the dynamic scale adjustment module is configured to:
and adjusting the first ratio to be equal to the first ratio suggestion value under the condition that the garbage collection consumed time is not greater than the garbage collection consumed time threshold value.
In one embodiment of the disclosure, the dynamic scale adjustment module is configured to:
determining the first ratio to be a value smaller than the first ratio suggested value under the condition that the garbage recycling consumed time is larger than the garbage recycling consumed time threshold value;
determining the capacity of the first storage space according to the first ratio, and migrating the first data exceeding the capacity of the first storage space to the second storage space under the condition that the data amount in the memory is larger than the capacity of the first storage space;
acquiring a current garbage recycling time-consuming actual value from a garbage recycling log, reducing the value of the first ratio under the condition that the garbage recycling time-consuming actual value is larger than the garbage recycling time-consuming threshold value, and executing the step of determining the capacity of the first storage space again based on the reduced first ratio; and ending the adjusting process under the condition that the actual value of the garbage recycling consumed time is not greater than the garbage recycling consumed time threshold value.
In one embodiment of the disclosure, the dynamic ratio adjustment module determines the first ratio to be half of the first ratio recommendation value when determining the first ratio to be a value less than the first ratio recommendation value;
the dynamic ratio adjusting module adjusts the first ratio to be half of the original value of the first ratio when the value of the first ratio is reduced.
In an embodiment of the disclosure, the dynamic proportion adjustment module ends the adjustment process when the actual value of the garbage collection consumed time is not greater than the garbage collection consumed time threshold and a difference between the actual value of the garbage collection consumed time and the garbage collection consumed time threshold is not greater than a preset threshold.
In one embodiment of the present disclosure, the data monitoring statistics module is further configured to:
under the condition of different memory capacities, counting different values of the first ratio and corresponding garbage recycling time consumption counting values;
according to the statistical result, establishing a mapping relation between the first ratio and the garbage recycling time consumption statistical value under the condition of different memory capacities;
and determining the first corresponding relation according to the mapping relation.
In an embodiment of the present disclosure, the memory further includes an index area for storing an index of the first data and the second data.
In one embodiment of the present disclosure, further comprising:
the data synchronization module is used for determining the second data which is cleared under the condition that the second data in the second storage space is cleared; and updating the index according to the cleared second data.
In an embodiment of the present disclosure, the system further includes an incremental offline clustering module, configured to:
storing the second data cleared in the second storage space into a database;
reading the cleared second data from the database;
when new data is stored in the memory, clustering the new data and the removed second data, and if clustering is successful, storing the clustered data into the memory;
and updating the index according to the clustered data.
In one embodiment of the present disclosure, the first data and the second data include public opinion text data;
the device also includes:
the memory area clustering module is used for clustering public opinion text data in the memory;
and the analysis module is used for performing at least one of propagation path analysis, emotion data analysis and public opinion trend analysis on the clustered data.
In a third aspect of the disclosed embodiments, a computer-readable medium is provided, on which a computer program is stored, which when executed by a processor implements the steps of the above-described memory management method.
In a fourth aspect of embodiments of the present disclosure, there is provided a computing device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the steps of the memory management method described above.
According to the memory management method and device disclosed by the embodiment of the disclosure, the ratio of the capacity of the first storage space in the memory to the capacity of the memory is adjusted according to the garbage collection time consumption and the preset garbage collection time consumption threshold, so that the capacity ratio of different storage spaces in the memory is reasonably adjusted, the memory utilization rate is improved, and the memory overflow is avoided.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 is a framework diagram of an exemplary application scenario of an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flow chart of an implementation of a memory management method according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates an implementation flowchart of migrating a first storage space and/or data stored in the first storage space according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a memory structure and a memory management method according to an embodiment of the disclosure;
fig. 5 is a schematic diagram illustrating a structure of a memory management and control module and a method for adjusting a capacity fraction according to an embodiment of the disclosure;
FIG. 6 schematically shows a flowchart for implementing garbage collection time acquisition according to an embodiment of the present disclosure;
FIG. 7 is a diagram schematically illustrating a correspondence between a strong reference memory fraction and a garbage collection time according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates an implementation flow diagram for adjusting a first ratio of the capacity of the first storage space to the memory capacity, according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram illustrating an implementation of secondary clustering of obsolete data in a soft-reference memory region by an incremental offline clustering module according to an embodiment of the present disclosure;
FIG. 10 is a flow chart illustrating an implementation of secondary clustering of obsolete data by the incremental offline clustering module according to an embodiment of the present disclosure;
FIG. 11 schematically illustrates a flow diagram of data processed by an incremental offline clustering module according to an embodiment of the present disclosure;
FIG. 12 schematically shows a medium diagram for a memory management method according to an embodiment of the present disclosure;
fig. 13 is a schematic structural diagram of a memory management device according to an embodiment of the disclosure;
FIG. 14 schematically shows a structural diagram of a computing device according to an embodiment of the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present disclosure will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the present disclosure, and are not intended to limit the scope of the present disclosure in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to an embodiment of the disclosure, a memory management method, medium, device and computing equipment are provided.
In this document, any number of elements in the drawings is by way of example and not by way of limitation, and any nomenclature is used solely for differentiation and not by way of limitation.
The principles and spirit of the present disclosure are explained in detail below with reference to several representative embodiments of the present disclosure.
Summary of The Invention
The inventor finds that the memory cannot be reasonably set in the related technology, so that the memory cannot be fully utilized or the risk of memory overflow exists.
In view of this, the present disclosure provides a memory management method and apparatus, which divide a memory into at least a first storage space for storing first data and a second storage space for storing second data. One of the storage spaces (the first storage space or the second storage space) after the memory is divided can be subjected to garbage collection, and when the memory space is insufficient, the storage space can be collected, so that the problem of memory overflow is avoided; because the memory space can be recovered in real time, the excessive memory capacity does not need to be allocated in advance, and the memory utilization rate can be improved.
Having described the general principles of the present disclosure, various non-limiting embodiments of the present disclosure are described in detail below.
Application scene overview
Referring first to fig. 1, fig. 1 is a block diagram of an exemplary application scenario of an embodiment of the present disclosure, in which a user interacts with a server 102 for public opinion text clustering and analysis through a client 101 on the user equipment. Those skilled in the art will appreciate that the frame diagram shown in fig. 1 is only one example in which embodiments of the present disclosure may be implemented. The scope of applicability of the disclosed embodiments is not limited in any way by this framework.
It is noted that the client herein may be any user device now existing, developing or developed in the future that is capable of interacting with the server 102 via any form of wired and/or wireless connection (e.g., Wi-Fi, LAN, cellular mobile communications, coaxial cable, etc.); including but not limited to: existing, developing, or future developed smartphones, non-smartphones, tablets, laptop personal computers, desktop personal computers, minicomputers, midrange computers, mainframe computers, and the like.
It is also noted that the server 102 herein is only one example of an existing, developing or future developed device capable of providing a user with a public opinion text clustering and analysis service. Embodiments of the present disclosure are not limited in any way in this respect.
Based on the framework shown in fig. 1, the server 102 may store the public opinion text data in a memory in response to receiving the public opinion text data, and the memory may be divided into at least a strong-reference memory space for storing strongly referenced data and a soft-reference memory space for storing soft-referenced data. The server can adjust the ratio of the capacity of the strong-reference memory space to the memory capacity according to the garbage collection time consumption and a preset garbage collection time consumption threshold. When the strong-reference memory space has free capacity, the server can preferentially store the received public opinion text data in the strong-reference memory space. Because the data stored in the soft-reference memory space is allowed to be cleared by a garbage collection operation, when the memory space is insufficient, data overflow can be avoided by collecting the memory of the corresponding soft-reference memory space; in addition, the memory space can be collected in real time, so that an excessively large memory capacity does not need to be allocated in advance, and the memory utilization rate can be improved.
Exemplary method
A memory management method according to an exemplary embodiment of the present disclosure is described below with reference to fig. 2. In this disclosure, a memory is divided into at least a first storage space for storing first data and a second storage space for storing second data, as shown in fig. 2, the memory management method of this disclosure includes the following steps:
s21: acquiring garbage collection time, wherein the garbage collection time represents the time spent on cleaning the garbage objects in the memory;
s22: and adjusting a first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection time consumption and a preset garbage collection time consumption threshold.
In this manner, the memory is divided into the first storage space and the second storage space, so that one of the storage spaces can be set to allow garbage collection and store data that is allowed to be deleted, while the other storage space is set to be exempt from garbage collection and store data that must not be cleared by a garbage collection operation. Since garbage collection is allowed on one of the storage spaces, when the memory space is insufficient, data overflow can be avoided by collecting the memory of the corresponding storage space; in addition, the memory space can be collected in real time, so that an excessively large memory capacity does not need to be allocated in advance, and the memory utilization rate can be improved.
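Later embodiments obtain the actual garbage collection time consumption from a lookup table or from the garbage collection log. As an illustration only (not part of the patent), the JAVA management API also exposes cumulative collection statistics from which an average per-collection time can be derived; a minimal sketch:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTimeMonitor {
    // Average time (in ms) spent per collection across all collectors, derived from the
    // JVM's cumulative garbage collection statistics.
    public static long averageGcTimeMillis() {
        long totalTimeMs = 0;
        long totalCount = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();   // -1 if the collector does not report it
            long timeMs = gc.getCollectionTime();   // -1 if the collector does not report it
            if (count > 0 && timeMs >= 0) {
                totalCount += count;
                totalTimeMs += timeMs;
            }
        }
        return totalCount == 0 ? 0 : totalTimeMs / totalCount;
    }
}
```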
In an embodiment, the first storage space may be a strongly-referenced memory space, and the second storage space may be a soft-referenced memory space. Accordingly, the first data stored in the first memory space is a strongly referenced object and the second data stored in the second memory space is a soft referenced object. In the JAVA virtual machine technology, strongly referenced objects cannot be garbage collected and soft referenced objects can be garbage collected. Garbage collection is a memory management function that can actively discover objects in a program that are no longer used and clean up those objects that are no longer used, thereby releasing more memory.
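A minimal JAVA sketch, purely illustrative, of the distinction the paragraph above relies on: a strongly referenced object stays reachable and is never collected, while the referent of a SoftReference may be cleared by the collector under memory pressure.

```java
import java.lang.ref.SoftReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        byte[] stronglyHeld = new byte[1024];             // strong reference: never collected while reachable
        SoftReference<byte[]> softlyHeld =
                new SoftReference<>(new byte[1024]);      // soft reference: may be cleared under memory pressure

        // get() returns the referent, or null once the garbage collector has cleared it.
        byte[] maybeCollected = softlyHeld.get();
        System.out.println("soft referent still present: " + (maybeCollected != null));
        System.out.println("strong referent length: " + stronglyHeld.length);
    }
}
```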
In the above embodiment, the memory is divided into the strong reference memory space and the soft reference memory space, and the ratio of the capacity of the strong reference memory space to the capacity of the memory is adjusted according to the garbage collection time consumption and the preset garbage collection time consumption threshold. Because the soft reference objects stored in the soft reference memory space are allowed to be cleared by garbage collection operation, when the memory space is insufficient, data overflow can be avoided by means of collecting the memory of the corresponding memory space; in addition, the memory space can be recycled in real time, so that overlarge memory capacity does not need to be allocated in advance, and the memory utilization rate can be improved.
In another embodiment, the first storage space may be a soft-reference memory space, and the second storage space may be a strong-reference memory space; accordingly, the first data stored in the first memory space is a soft-reference object and the second data stored in the second memory space is a strong-reference object.
In the following description, the first storage space is taken as a strong reference memory space, and the second storage space is taken as a soft reference memory space.
From the perspective of memory use effect, the larger the proportion of the strong-reference memory space, the better; but a larger strong-reference proportion also leads to a larger garbage collection time consumption. Therefore, in order to balance the memory use effect and the garbage collection performance, the capacity proportion of the first storage space is adjusted according to the garbage collection time consumption monitored in real time and the preset garbage collection time consumption threshold (since the memory mainly consists of the first storage space and the second storage space, adjusting the capacity proportion of the first storage space also adjusts the capacity proportion of the second storage space at the same time), so that the capacity of each storage space in the memory is allocated reasonably: the strong-reference memory space occupies as large a proportion as possible while the garbage collection time consumption threshold is satisfied, balancing the use effect of the memory and the garbage collection time consumption performance.
In a possible embodiment, after adjusting the first ratio of the capacity of the first storage space to the capacity of the memory, the first storage space and/or data stored in the first storage space may be further migrated, and fig. 3 schematically illustrates a flowchart for migrating the first storage space and/or data stored in the first storage space according to an embodiment of the present disclosure, where the flowchart includes:
s31: determining the capacity of the first storage space according to the first ratio;
s32: under the condition that the data amount in the memory is larger than the capacity of the first storage space, migrating the portion of the first data exceeding the capacity of the first storage space to the second storage space; or, in the case that the data amount in the first storage space is smaller than the capacity of the first storage space and the second data exists in the second storage space, migrating at least part of the second data in the second storage space to the first storage space.
In the process, the capacity of the first storage space is determined according to the first ratio, and when the data in the memory is larger than the capacity of the first storage space, the excess part is migrated to the second storage space, so that it can be ensured that data overflow never occurs in the first storage space; and when the data volume in the first storage space is smaller than the capacity of the first storage space, data in the second storage space is migrated to the first storage space, so that data can be guaranteed to be preferentially stored in the first storage space.
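A minimal sketch of this migration step (S31/S32). The map names, the byte-based accounting and entrySizeBytes() are assumptions for illustration; the patent does not prescribe a particular data structure.

```java
import java.util.Iterator;
import java.util.Map;

public class RatioEnforcer {
    // strongSpace / softSpace stand for the first and second storage spaces.
    public static void enforce(Map<String, Object> strongSpace,
                               Map<String, Object> softSpace,
                               long memoryCapacityBytes,
                               double firstRatio,
                               long usedBytesInStrongSpace) {
        long strongCapacityBytes = (long) (memoryCapacityBytes * firstRatio); // capacity from the first ratio
        if (usedBytesInStrongSpace > strongCapacityBytes) {
            // Demote entries until the strong space fits within its capacity again.
            Iterator<Map.Entry<String, Object>> it = strongSpace.entrySet().iterator();
            while (usedBytesInStrongSpace > strongCapacityBytes && it.hasNext()) {
                Map.Entry<String, Object> e = it.next();
                softSpace.put(e.getKey(), e.getValue());
                usedBytesInStrongSpace -= entrySizeBytes(e.getValue());
                it.remove();
            }
        } else if (!softSpace.isEmpty()) {
            // Promote soft data back while the strong space has headroom.
            Iterator<Map.Entry<String, Object>> it = softSpace.entrySet().iterator();
            while (usedBytesInStrongSpace < strongCapacityBytes && it.hasNext()) {
                Map.Entry<String, Object> e = it.next();
                strongSpace.put(e.getKey(), e.getValue());
                usedBytesInStrongSpace += entrySizeBytes(e.getValue());
                it.remove();
            }
        }
    }

    // Placeholder per-entry size estimate; real accounting is application specific.
    private static long entrySizeBytes(Object value) {
        return 64;
    }
}
```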
Taking the example that the first data and the second data stored in the memory include public opinion text data, the public opinion text data in the memory can be further clustered. In order to cluster the public opinion text data, an inverted index and a forward index need to be constructed: in the inverted index, the primary key is a word code and the value is the corresponding set of text document codes; in the forward index, the primary key is the document code and the value is the corresponding text vector. By constructing the inverted index, candidate similar texts meeting the conditions are found first, and global matching is avoided: all texts containing a word are found in the inverted index according to the word code sequence numbers contained in the text vector to be matched. Therefore, candidate similar texts can be quickly queried, global matching of all texts is avoided, and clustering efficiency is improved.
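A small illustrative sketch of the inverted and forward indexes just described; type choices such as int word codes and long document codes are assumptions.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TextIndex {
    // Inverted index: word code -> set of document codes containing that word.
    private final Map<Integer, Set<Long>> invertedIndex = new HashMap<>();
    // Forward index: document code -> text vector (here simply the word codes).
    private final Map<Long, int[]> forwardIndex = new HashMap<>();

    public void addDocument(long docId, int[] wordCodes) {
        forwardIndex.put(docId, wordCodes);
        for (int word : wordCodes) {
            invertedIndex.computeIfAbsent(word, w -> new HashSet<>()).add(docId);
        }
    }

    // Candidate similar documents: every document sharing at least one word code,
    // found without scanning the whole corpus.
    public Set<Long> candidates(int[] wordCodesToMatch) {
        Set<Long> result = new HashSet<>();
        for (int word : wordCodesToMatch) {
            Set<Long> docs = invertedIndex.get(word);
            if (docs != null) {
                result.addAll(docs);
            }
        }
        return result;
    }
}
```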
Fig. 4 schematically illustrates a memory structure and a memory management method according to an embodiment of the disclosure. As shown in fig. 4, the memory area includes a strong reference memory area and a soft reference memory area, where the strong reference memory area and the soft reference memory area store public opinion text data; the memory region may also include an index region for storing an index of first data (e.g., data stored in a strongly-referenced memory region) and second data (e.g., data stored in a soft-referenced memory region).
In fig. 4, the data stored in the strongly-referenced memory region includes strongly-referenced cluster category information, such as hash map (HashMap) in fig. 4; the data stored in the soft-reference memory region includes soft-reference cluster category information, such as soft hash map (SoftHashMap) in fig. 4. The strongly referenced clustering category information cannot be subjected to garbage collection and can only be manually deleted; the cluster category information of the soft references may be garbage collected. The index area stores an inverted index, which is specifically an index map (IndexMap) in fig. 4, and the inverted index cannot be garbage-collected.
As shown in fig. 4, a part of data is stored in the strong-reference memory area, and the data that needs to be eliminated is placed in the soft-reference memory area. The soft reference memory area can be recycled when the memory of the JAVA virtual machine is insufficient, so that the problem of memory overflow can be avoided. Due to the existence of the soft reference memory area, the storage data volume of the clustering information can be automatically matched with the memory size of the JAVA virtual machine, the memory utilization rate is improved, and meanwhile, the risk of memory overflow is avoided, so that the memory management problem in the text data clustering process in the public opinion analysis field is solved.
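A skeleton, assuming JAVA maps, of the three regions shown in fig. 4; the SoftHashMap of the figure is approximated here by a map whose values are wrapped in SoftReference, and all field names are illustrative.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// ClusterInfo stands in for the clustering category information.
public class ClusteringMemory<ClusterInfo> {
    // Strong-reference region (HashMap): never cleared by the JVM, only removed explicitly.
    final Map<String, ClusterInfo> strongClusters = new HashMap<>();
    // Soft-reference region (SoftHashMap-like): values wrapped in SoftReference so the
    // JVM may reclaim them when memory runs low.
    final Map<String, SoftReference<ClusterInfo>> softClusters = new HashMap<>();
    // Index region (IndexMap): inverted index from word code to cluster keys; strongly referenced.
    final Map<Integer, Set<String>> indexMap = new HashMap<>();
}
```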
In some embodiments, migrating the portion of the first data exceeding the capacity of the first storage space to the second storage space includes:
and screening third data to be migrated from the first data based on a preset data migration strategy, and migrating the third data from the first storage space to the second storage space.
The preset data migration policy includes, but is not limited to, First In First Out (FIFO), Least Recently Used (LRU), and Least Frequently Used (LFU) policies.
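As one concrete example of such a policy, an LRU screen can be built on JAVA's LinkedHashMap in access order; the demotion callback standing in for "migrate to the soft-reference area" is an assumption for illustration.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// LRU screen for the strong-reference region: the least recently accessed entry is
// handed to a demotion callback instead of being silently discarded.
public class LruScreen<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;
    private final BiConsumer<K, V> demoteToSoftSpace;

    public LruScreen(int maxEntries, BiConsumer<K, V> demoteToSoftSpace) {
        super(16, 0.75f, true);               // accessOrder = true gives LRU iteration order
        this.maxEntries = maxEntries;
        this.demoteToSoftSpace = demoteToSoftSpace;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        if (size() > maxEntries) {
            demoteToSoftSpace.accept(eldest.getKey(), eldest.getValue()); // migrate, don't drop
            return true;
        }
        return false;
    }
}
```

Passing accessOrder = true to the LinkedHashMap constructor makes iteration (and hence the "eldest" entry) follow least-recently-accessed order, which is what an LRU screening policy needs.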
As shown in fig. 4, in order to reasonably use the strongly-referenced memory region and the soft-referenced memory region, a memory management and control module is introduced in some embodiments. The module can be divided into three sub-modules, namely a data migration module, a data monitoring and counting module, and a strong/soft memory area proportion dynamic adjustment module (a proportion dynamic adjustment module for short).
The data monitoring and counting module monitors the time consumed by garbage recovery in real time. The proportion dynamic adjustment module dynamically adjusts a first ratio of the capacity of the strongly-referenced memory region to the capacity of the whole memory according to the garbage recycling consumed time and the garbage recycling consumed time threshold value, and the memory is fully utilized on the premise that the garbage recycling consumed time monitored in real time does not exceed the garbage recycling consumed time threshold value. And the data migration module performs data migration according to the first ratio.
After the data in the soft reference memory region is cleaned by the JAVA virtual machine, the consistency between the inverted index memory data and the clustering information memory data needs to be maintained. To ensure the foregoing consistency, the present disclosure may further include: determining second data which is cleared under the condition that the second data in the second storage space is cleared; and updating the index according to the cleared second data. As shown in fig. 4, a data synchronization module is introduced in some embodiments. And the data synchronization module is used for ensuring the data synchronization of the inverted index and the clustering information, and calling a callback method through the clustering information KEY after the data in the soft reference memory area is cleared by the JAVA virtual machine so as to synchronously clear the data in the inverted index.
Further, after some data in the soft-reference memory area is cleared, clustering of long-period public opinion text data may fail. In order to avoid such clustering failures, the present disclosure may further include: storing the second data cleared from the second storage space into a database; reading the cleared second data from the database; when new data is stored in the memory, clustering the new data and the cleared second data, and if clustering is successful, storing the clustered data in the memory; and updating the index according to the clustered data. As shown in fig. 4, in some embodiments, an incremental offline clustering module is introduced; the data synchronization module records the time range of the data eliminated from memory, and actively triggers the incremental offline clustering module to perform secondary clustering on the data in the specific time window.
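A hedged sketch of the incremental offline clustering flow just described; Database, Clusterer and ClusterInfo are placeholder application types, not real libraries.

```java
import java.util.List;
import java.util.Map;

public class IncrementalOfflineClustering {
    static class ClusterInfo { /* cluster category information */ }

    interface Database {
        void saveEvicted(List<ClusterInfo> evicted);
        List<ClusterInfo> loadEvictedBetween(long fromMillis, long toMillis);
    }

    interface Clusterer {
        // Returns the merged cluster, or null if the new data could not be clustered
        // with the evicted data.
        ClusterInfo tryMerge(ClusterInfo newData, List<ClusterInfo> evicted);
    }

    private final Database database;
    private final Clusterer clusterer;

    IncrementalOfflineClustering(Database database, Clusterer clusterer) {
        this.database = database;
        this.clusterer = clusterer;
    }

    // Triggered when new data arrives: re-cluster it against the data evicted from the
    // soft-reference region within the recorded time window (secondary clustering).
    ClusterInfo onNewData(ClusterInfo newData, long windowStart, long windowEnd,
                          Map<String, ClusterInfo> strongClusters, String clusterKey) {
        List<ClusterInfo> evicted = database.loadEvictedBetween(windowStart, windowEnd);
        ClusterInfo merged = clusterer.tryMerge(newData, evicted);
        if (merged != null) {
            strongClusters.put(clusterKey, merged);  // clustering succeeded: put the result back in memory
            // ...the inverted index would also be updated here, as described above
        }
        return merged;
    }
}
```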
The above-mentioned content is combined with fig. 4 to briefly describe each step of the memory management method proposed by the present disclosure, and a module for executing each step. The above modules are only examples, and the present disclosure may also adopt other related modules to perform the above steps. The above steps and corresponding modules are described in detail below.
First portion, soft-reference memory area:
in some embodiments, the soft-reference memory area is used for storing public opinion text clustering information. Specifically, the inverted index data is stored in the strongly-referenced memory area; the data in this memory area can only be deleted manually and cannot be cleared actively by the JAVA virtual machine. The clustering information is stored in both the strong-reference memory area and the soft-reference memory area: for example, data from a recent period of time is stored in the strong-reference memory area, outdated data is eliminated according to an LRU policy, and the eliminated data is stored in the soft-reference memory area. The soft-reference memory area can be reclaimed when the memory of the JAVA virtual machine is insufficient, so that the problem of memory overflow can be avoided. Due to the existence of the soft-reference memory area, the amount of clustering information stored can automatically match the memory size of the JAVA virtual machine while avoiding the risk of memory overflow, thereby solving the problem of long-period text clustering memory management in the public opinion analysis field.
Using the soft reference (SoftReference) object in JAVA, the method can have the eliminated soft-reference objects in the soft-reference memory area passed into a memory queue, then call the get function to retrieve the eliminated soft-reference objects from the memory queue, and then delete the eliminated soft-reference objects.
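A minimal sketch of one common way to realize this in JAVA. Because get() on a SoftReference that has already been cleared returns null, the key is carried on the reference itself here (a keyed SoftReference subclass, which is an assumption, not something the patent dictates), and a ReferenceQueue is drained to keep the inverted index consistent, as the data synchronization module requires.

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class SoftClusterStore<V> {
    // Carries the cluster key so the evicted entry can still be identified after the
    // referent has been cleared by the collector.
    static final class KeyedSoftReference<V> extends SoftReference<V> {
        final String key;
        KeyedSoftReference(String key, V value, ReferenceQueue<V> queue) {
            super(value, queue);
            this.key = key;
        }
    }

    private final ReferenceQueue<V> queue = new ReferenceQueue<>();
    private final Map<String, KeyedSoftReference<V>> softClusters = new ConcurrentHashMap<>();

    public void put(String key, V value) {
        softClusters.put(key, new KeyedSoftReference<>(key, value, queue));
    }

    public V get(String key) {
        KeyedSoftReference<V> ref = softClusters.get(key);
        return ref == null ? null : ref.get();
    }

    // Drain the references the JVM has cleared and keep the inverted index consistent
    // by removing every posting that points at an evicted cluster key.
    @SuppressWarnings("unchecked")
    public void synchronizeIndex(Map<Integer, Set<String>> invertedIndex) {
        KeyedSoftReference<V> cleared;
        while ((cleared = (KeyedSoftReference<V>) queue.poll()) != null) {
            softClusters.remove(cleared.key);
            for (Set<String> postings : invertedIndex.values()) {
                postings.remove(cleared.key);
            }
        }
    }
}
```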
Both the strong-reference memory area and the soft-reference memory area store public opinion text clustering information. Therefore, when the public opinion text clustering information is read, it is first queried from the strong-reference memory area and, if found, returned directly; if it is not found, the soft-reference memory area is queried and the queried public opinion text clustering information is returned. The division of the soft-reference memory area therefore does not hinder the reading of the public opinion text clustering information.
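The read path described above, sketched against the SoftClusterStore example given earlier; names are illustrative.

```java
import java.util.Map;

public final class ClusterLookup {
    // Query the strong-reference region first, then fall back to the soft-reference region.
    public static <V> V lookup(Map<String, V> strongClusters,
                               SoftClusterStore<V> softClusters,
                               String key) {
        V hit = strongClusters.get(key);
        if (hit != null) {
            return hit;                 // hit in the strong-reference memory area
        }
        return softClusters.get(key);   // null if the JVM has already reclaimed the entry
    }
}
```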
The second part, the memory management and control module:
the memory management and control module is used for adjusting the capacity ratio of the strong-reference memory area and the soft-reference memory area in the memory. Fig. 5 schematically illustrates a structure of the memory management and control module and a capacity proportion adjustment method according to an embodiment of the disclosure. As shown in fig. 5, the memory management and control module is divided into three sub-modules, namely a data migration module, a data monitoring and counting module, and a strong/soft memory area proportion dynamic adjustment module (a proportion dynamic adjustment module for short). The data migration module adopts a preset policy to move the strongly referenced data whose amount exceeds the capacity of the strong-reference memory area into the soft-reference memory area. The soft-reference memory area can continue to receive data, but is cleared by the JAVA virtual machine when the memory usage rate reaches 100%. The data monitoring and counting module monitors statistics on the capacity proportion distribution of the strong and soft memory areas and on the garbage collection time consumption. Based on the statistics, the proportion dynamic adjustment module dynamically adjusts the capacity proportion of the strong-reference and soft-reference memory regions, so that the memory allocation proportion can be rapidly adjusted on the basis of fully utilizing the memory, keeping the garbage collection time within a reasonable range. The preset policies include, but are not limited to, First In First Out (FIFO), Least Recently Used (LRU), and Least Frequently Used (LFU) policies.
The proportion dynamic adjustment module acquires the garbage collection time consumption, and adjusts a first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection time consumption and a preset garbage collection time consumption threshold. Fig. 6 schematically shows a flowchart for implementing obtaining garbage collection time according to an embodiment of the present disclosure, including:
s61: searching a preset first corresponding relation according to the capacity of the memory to obtain a corresponding first ratio suggestion value and a garbage recycling time-consuming suggestion value; the first corresponding relation represents a first ratio suggestion value and a garbage recycling time consumption suggestion value corresponding to different memory capacities;
s62: and determining the garbage collection time consumption according to the garbage collection time consumption suggestion value.
Table 1 is an example of the first correspondence relationship described above.
TABLE 1
Memory capacity | 1G | 2G | 4G | 8G | 16G | 32G
First ratio suggestion value | 50% | 65% | 75% | 85% | 90% | 95%
Garbage collection time consumption suggestion value | 45ms | 75ms | 100ms | 200ms | 250ms | 300ms
Taking table 1 as an example, if the memory capacity is 4G, according to the first corresponding relationship shown in table 1, the first ratio recommended value corresponding to the memory capacity of 4G can be found to be 75%, and the recommended value for garbage recycling time is 100 ms.
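An illustrative encoding of Table 1 as a lookup structure; the nearest-lower-capacity fallback for capacities not listed in the table is an assumption, not something the patent specifies.

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class FirstCorrespondence {
    public static final class Suggestion {
        public final double ratio;        // first ratio suggestion value
        public final long gcTimeMillis;   // garbage collection time consumption suggestion value
        Suggestion(double ratio, long gcTimeMillis) {
            this.ratio = ratio;
            this.gcTimeMillis = gcTimeMillis;
        }
    }

    // Values copied from Table 1: memory capacity in GB -> suggestion.
    private static final NavigableMap<Integer, Suggestion> TABLE = new TreeMap<>();
    static {
        TABLE.put(1, new Suggestion(0.50, 45));
        TABLE.put(2, new Suggestion(0.65, 75));
        TABLE.put(4, new Suggestion(0.75, 100));
        TABLE.put(8, new Suggestion(0.85, 200));
        TABLE.put(16, new Suggestion(0.90, 250));
        TABLE.put(32, new Suggestion(0.95, 300));
    }

    // Exact or nearest-lower capacity lookup.
    public static Suggestion lookup(int memoryCapacityGb) {
        Integer key = TABLE.floorKey(memoryCapacityGb);
        return key == null ? TABLE.firstEntry().getValue() : TABLE.get(key);
    }
}
```

For example, lookup(4) returns a 0.75 first ratio and a 100 ms suggestion, matching the 4G column of Table 1.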
In table 1 above, the first ratio recommended value also increases from 50% to about 95% as the memory size increases. The reason is that as the memory increases, the memory size reserved mainly for the soft reference area remains within a reasonable range under the condition that the data size is not changed, and the time consumption can be stabilized within a certain threshold value.
Accordingly, the present disclosure may further include a process of establishing the first corresponding relationship, including:
under the condition of different memory capacities, counting different values of the first ratio and corresponding garbage recycling time consumption counting values;
according to the statistical result, establishing a mapping relation between the first ratio and the garbage recycling time consumption statistical value under the condition of different memory capacities;
and determining a first corresponding relation according to the mapping relation.
In the present disclosure, the data monitoring and counting module can be used to collect, for memories of different capacities, the different values of the first ratio and the corresponding garbage collection time consumption statistics.
The larger the proportion of the strongly referenced memory is, the longer a single full garbage collection takes. In short, the larger the strong reference proportion is, the more objects remain after one full garbage collection, so the probability of root-node scanning and memory fragmentation is higher, and the time required for memory compaction during garbage collection increases. More seriously, strong references hold a large number of long-period objects and use up the memory in a short time, so the frequency of full garbage collections increases significantly. Fig. 7 schematically illustrates a correspondence between the strong-reference memory proportion and the garbage collection time consumption according to an embodiment of the present disclosure. As shown in fig. 7, the abscissa is the proportion of strongly-referenced memory, and the ordinate is the product of the actual time t of a single garbage collection and the garbage collection frequency F per unit time, which represents the comprehensive time consumption of garbage collection over a period of time. As shown in fig. 7, in the case of a memory size of 8G, when the proportion of strongly-referenced memory is greater than 85%, the product of the single-collection time and the frequency of full garbage collections increases approximately exponentially. Therefore, in the case where the memory capacity is 8G, the 85% proportion can be taken as the first ratio suggestion value in Table 1. It can be seen that the first ratio suggestion value can be regarded as a data inflection point, or understood as a risk critical point of the system: when the proportion exceeds the critical point, the garbage collection time and frequency increase sharply, which eventually results in data backlog, and in severe cases the system may even crash.
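A small sketch, under the assumption that the composite cost t × F has been measured for a set of candidate ratios, of how one row of the first correspondence could be derived: pick the largest ratio whose composite cost still stays within a budget, which approximates the inflection point described above.

```java
import java.util.Map;
import java.util.TreeMap;

public class CorrespondenceBuilder {
    // stats: candidate first ratio -> observed composite GC cost (single collection time t
    // multiplied by collection frequency F) for a given memory capacity.
    public static double suggestedRatio(TreeMap<Double, Double> stats, double costThreshold) {
        double best = stats.firstKey();
        for (Map.Entry<Double, Double> e : stats.entrySet()) {
            if (e.getValue() <= costThreshold) {
                best = Math.max(best, e.getKey());   // largest ratio still within the cost budget
            }
        }
        return best;
    }
}
```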
Therefore, the adjusting the first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection time consumption and the preset garbage collection time consumption threshold may include: and under the condition that the garbage collection time consumption is not greater than the garbage collection time consumption threshold value, adjusting the first ratio to be equal to the first ratio suggestion value.
Optionally, the adjusting the first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection time consumption and a preset garbage collection time consumption threshold may further include:
determining the first ratio as a numerical value smaller than the first ratio suggested value under the condition that the garbage recycling consumed time is larger than a garbage recycling consumed time threshold value;
determining the capacity of a first storage space according to a first ratio, and migrating first data exceeding the capacity of the first storage space to a second storage space under the condition that the data amount in a memory is larger than the capacity of the first storage space;
acquiring a current garbage recycling time-consuming actual value from a garbage recycling log, reducing the value of the first ratio under the condition that the garbage recycling time-consuming actual value is larger than a garbage recycling time-consuming threshold value, and executing the step of determining the capacity of the first storage space again based on the reduced first ratio; and ending the adjusting process under the condition that the actual value of the garbage recycling consumed time is not greater than the garbage recycling consumed time threshold value.
In some embodiments, the first ratio may be determined using a binary-search-like method. For example, determining the first ratio as a value smaller than the first ratio suggestion value includes: determining the first ratio as half of the first ratio suggestion value;
reducing the value of the first ratio, including: and adjusting the first ratio to be half of the original value of the first ratio.
The garbage collection time-consuming threshold can be determined according to the real-time requirement of text clustering: if the real-time requirement is high, the threshold can be set smaller; if the real-time requirement of clustering is not high, the threshold can be set appropriately larger, so that a higher memory utilization rate is obtained as far as possible and the clustering effect is improved.
In addition, in the adjusting process, the condition for finishing the adjustment may further include that a difference between the actual value of the consumed garbage recycling time and the threshold value of the consumed garbage recycling time is not greater than a preset threshold, so that on the premise that the actual value of the consumed garbage recycling time is not greater than the threshold value of the consumed garbage recycling time, a larger strong reference memory occupation ratio is set as much as possible, and the memory use effect is improved.
Optionally, the ending the adjustment process until the actual value of the consumed garbage recycling time is not greater than the threshold value of the consumed garbage recycling time includes: and ending the adjusting process under the condition that the actual value of the garbage recycling consumed time is not greater than the garbage recycling consumed time threshold value and the difference between the actual value of the garbage recycling consumed time and the garbage recycling consumed time threshold value is not greater than a preset threshold value.
Fig. 8 schematically illustrates a flowchart for adjusting the first ratio of the capacity of the first storage space to the memory capacity. In the example of fig. 8, the first storage space is the strong-reference memory, and the first ratio is referred to as the strong-reference memory ratio. The process shown in fig. 8 includes the following steps:
S81: and reading the memory capacity.
S82: searching a first corresponding relation (as shown in table 1) according to the memory capacity, and determining a garbage recycling time-consuming suggestion value corresponding to the memory capacity; and determining the garbage collection time consumption (marked as t) corresponding to the current memory capacity as being equal to the garbage collection time consumption suggested value.
S83: and judging whether T is not larger than a garbage recycling time-consuming threshold (marked as T). And if T is less than or equal to T, determining the strong reference memory occupation ratio (marked as r) as being equal to a first ratio suggestion value corresponding to the memory capacity in the table 1, and outputting the strong reference memory occupation ratio. If T > T, step S84 is executed.
Wherein, the garbage collection time-consuming threshold (i.e. T) can be determined according to the real-time requirement of the text clustering. For example, when different types of public opinion text data are clustered, the corresponding text clustering instantaneity requirements are different; therefore, the type of the public opinion text data stored in the memory can be determined firstly, the corresponding text clustering real-time requirement is determined according to the type, and then the garbage recycling time-consuming threshold (namely T) is determined according to the text clustering real-time requirement.
S84: the strongly referenced memory fraction (i.e., r) is determined to be equal to half of the first ratio recommendation.
S85: the current garbage collection time (i.e., t) is monitored.
S86: and judging whether T is not larger than the time-consuming threshold (namely T) for garbage collection. If T is less than or equal to T, go to step S87; if T > T, the strongly referenced memory fraction (i.e., r) is determined to be equal to half of the current strongly referenced memory fraction, and execution returns to step S85.
S87: and judging whether the difference between T and T is not greater than a preset threshold (marked as C). If T-T is less than or equal to C, outputting the current strong reference memory ratio; if T-T > C, step S88 is performed.
S88: and determining the strong reference memory ratio (namely r) as being equal to the intermediate value of the current strong reference memory ratio and the first ratio suggestion value, and returning to execute the step S85.
As a specific example, when the memory capacity is 8G and the garbage collection time-consuming threshold T is 100ms, table 1 is first looked up according to the memory capacity: the corresponding garbage collection time-consuming suggested value is 200ms and the corresponding first ratio suggested value is 85%. Since the suggested garbage collection time is greater than T, the strong-reference memory ratio r is set to half of the first ratio suggested value, namely 42.5%. The actual garbage collection time is then monitored in real time; assume it is 70ms, which is smaller than T, while its difference from T (namely 30ms) is greater than the preset threshold (assumed to be 10ms). The strong-reference memory ratio r is therefore set to the midpoint of the current ratio (42.5%) and the first ratio suggested value (85%), namely 63.75%. After the ratio is set to 63.75%, the actual garbage collection time continues to be monitored in real time and compared against T. These steps are repeated until the configured strong-reference memory ratio makes the actual garbage collection time smaller than T with a difference from T not greater than the preset threshold; the adjustment then ends and the finally determined strong-reference memory ratio is output.
For another example, when the memory capacity is 8G and the garbage collection time-consuming threshold T is 300ms, table 1 gives a garbage collection time-consuming suggested value of 200ms and a first ratio suggested value of 85%. Since the suggested garbage collection time is smaller than T, the strong-reference memory ratio r is set equal to the first ratio suggested value, namely 85%, and is output.
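To make the flow concrete, the following Java sketch reproduces steps S81 to S88 under stated assumptions: the table contents (only the 8 GB row mirrors the worked example above), the GC-time monitor and all class and method names are illustrative and are not taken from the disclosure.

```java
import java.util.NavigableMap;
import java.util.TreeMap;
import java.util.function.DoubleSupplier;

// Illustrative sketch of steps S81-S88; table contents and all names are assumptions.
public class StrongRefRatioTuner {

    // Suggested GC time (ms) and suggested strong-reference ratio for a memory capacity.
    record Suggestion(double suggestedGcMillis, double suggestedRatio) {}

    // Hypothetical "first correspondence" (table 1): memory capacity in GB -> suggestion.
    // Only the 8 GB row mirrors the worked example; the other rows are made up.
    private static final NavigableMap<Integer, Suggestion> TABLE_1 = new TreeMap<>();
    static {
        TABLE_1.put(4, new Suggestion(120, 0.80));
        TABLE_1.put(8, new Suggestion(200, 0.85));
        TABLE_1.put(16, new Suggestion(350, 0.90));
    }

    /**
     * @param capacityGb        memory capacity read in S81 (assumed >= smallest table key)
     * @param gcThresholdMillis garbage collection time-consuming threshold T
     * @param toleranceMillis   preset threshold C on the headroom T - t
     * @param gcMonitor         assumed helper returning the currently measured GC time t (ms)
     * @return the strong-reference memory ratio r to use
     */
    public static double tune(int capacityGb, double gcThresholdMillis, double toleranceMillis,
                              DoubleSupplier gcMonitor) {
        Suggestion s = TABLE_1.floorEntry(capacityGb).getValue();      // S82: look up table 1
        if (s.suggestedGcMillis() <= gcThresholdMillis) {              // S83: t <= T
            return s.suggestedRatio();                                 // use the suggested ratio directly
        }
        double r = s.suggestedRatio() / 2.0;                           // S84: start from half the suggestion
        while (true) {
            applyRatio(r);                                             // resize strong area, migrate excess data
            double t = gcMonitor.getAsDouble();                        // S85: monitor the actual GC time
            if (t > gcThresholdMillis) {
                r = r / 2.0;                                           // S86: too slow -> halve the ratio
            } else if (gcThresholdMillis - t > toleranceMillis) {
                r = (r + s.suggestedRatio()) / 2.0;                    // S88: headroom left -> midpoint with suggestion
            } else {
                return r;                                              // S87: T - t <= C -> done
            }
        }
    }

    private static void applyRatio(double ratio) {
        // Placeholder for the described system's behaviour: recompute the strong-reference
        // capacity from the ratio and migrate data beyond it to the soft-reference area.
    }
}
```

Note that, as in the flowchart, the upper bound used in S88 is always the first ratio suggested value, so termination relies on the measured garbage collection time settling within the tolerance C.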
One reason for setting the strong-reference memory ratio in the above manner is that garbage collection time and the strong-reference memory ratio are positively correlated: the larger the strong-reference memory ratio, the longer the garbage collection time. From the viewpoint of system performance, the shorter the garbage collection time the better; from the viewpoint of algorithm effect, the larger the strong-reference memory ratio the better. Setting the strong-reference memory ratio is therefore essentially a trade-off between algorithm effect and system performance. As can be seen from the correspondence between the strong-reference memory ratio and garbage collection time shown in fig. 7, while the strong-reference memory ratio is below the first ratio suggested value, garbage collection time grows slowly as the ratio increases; once the ratio exceeds the first ratio suggested value, garbage collection time grows sharply; the first ratio suggested value can thus be regarded as an inflection point of the data.
Therefore, when the strong-reference memory ratio is above the first ratio suggested value, even a small increase in the ratio causes a large increase in the actual garbage collection time; the resulting large drop in system performance clearly cannot be compensated by the slight algorithm-effect gain that the small increase brings. Accordingly, when the garbage collection time-consuming suggested value is not greater than the garbage collection time-consuming threshold, the strong-reference memory ratio can be set equal to the corresponding first ratio suggested value; when the suggested value exceeds the threshold, the strong-reference memory ratio is set smaller than the corresponding first ratio suggested value, while keeping the actual garbage collection time monitored in real time as close to the threshold as possible.
The data synchronization module is responsible for keeping the inverted index and the clustering information in sync: when data in the soft-reference memory area is cleared by the JAVA virtual machine, a callback method is invoked with the clustering information KEY to synchronously clear the corresponding data in the inverted index. Specifically, embodiments of the present disclosure may use SoftReference objects of the JAVA virtual machine to implement a soft hash map (SoftHashMap): SoftReference objects cleared by the JAVA virtual machine are registered in a reference queue, and the data synchronization module obtains the cleared KEYs by pulling that queue. The implementation is based on a soft-reference-backed HashMap whose callback capability is extended with a callback interface that is invoked, per KEY, when a soft-referenced object is cleared.
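As an illustration of this mechanism, the sketch below shows a soft-reference-backed map built on the standard java.lang.ref.SoftReference and ReferenceQueue classes: cleared references surface in the queue, and their KEYs are passed to a callback so the inverted index can be purged in sync. The class layout and the callback interface name are assumptions rather than the disclosure's exact implementation.

```java
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a soft-reference-backed map with a per-KEY clearance callback; names are assumptions.
public class SoftHashMap<K, V> {

    // Soft reference that remembers the KEY its value belonged to.
    private static final class SoftValue<K, V> extends SoftReference<V> {
        final K key;
        SoftValue(K key, V value, ReferenceQueue<V> queue) {
            super(value, queue);
            this.key = key;
        }
    }

    // Callback invoked with each cleared KEY, e.g. to purge that key from the inverted index.
    public interface EvictionCallback<K> {
        void onCleared(K key);
    }

    private final Map<K, SoftValue<K, V>> map = new ConcurrentHashMap<>();
    private final ReferenceQueue<V> queue = new ReferenceQueue<>();
    private final EvictionCallback<K> callback;

    public SoftHashMap(EvictionCallback<K> callback) {
        this.callback = callback;
    }

    public void put(K key, V value) {
        drainClearedEntries();
        map.put(key, new SoftValue<>(key, value, queue));
    }

    public V get(K key) {
        drainClearedEntries();
        SoftValue<K, V> ref = map.get(key);
        return ref == null ? null : ref.get();
    }

    // Pulls the reference queue: every SoftReference cleared by the JVM shows up here,
    // and its KEY is handed to the callback so the inverted index stays in sync.
    @SuppressWarnings("unchecked")
    private void drainClearedEntries() {
        SoftValue<K, V> cleared;
        while ((cleared = (SoftValue<K, V>) queue.poll()) != null) {
            map.remove(cleared.key);
            callback.onCleared(cleared.key);
        }
    }
}
```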
Fig. 9 is a schematic diagram illustrating how the incremental offline clustering module performs secondary clustering on data eliminated from the soft-reference memory region according to an embodiment of the present disclosure. Because some data in the soft-reference memory area are eliminated, clustering may fail for long-period public opinion text data. To solve this problem, as shown in fig. 9, the present disclosure introduces an incremental offline clustering module: the data synchronization module records the time range of the data eliminated from the memory and actively triggers the offline clustering module to perform secondary clustering on the eliminated data within a specific time window.
Compared with ordinary offline clustering, this method records the time range of the eliminated data through the data synchronization module and actively triggers the incremental offline clustering module to cluster only the data in that time range offline. Because the data does not need to be stored in full and only the portion eliminated from memory needs offline clustering, the offline clustering module requires less memory while the long-period text clustering problem is addressed.
Fig. 10 schematically shows a flowchart for implementing secondary clustering of obsolete data by the incremental offline clustering module according to an embodiment of the present disclosure, where the flowchart includes:
s101: and the data synchronization module triggers the increment offline clustering module to perform clustering.
S102: and the incremental offline clustering module reads the eliminated data.
S103: when new data is added into the memory, the incremental offline clustering module reads the new data.
S104: and the increment offline clustering module is used for matching and clustering the eliminated data and the newly added data and judging whether the eliminated data and the newly added data are successfully clustered. If yes, executing step S105; otherwise, the process returns to step S103.
S105: and storing the clustered data into a memory, and deleting the information in the incremental offline clustering module.
Because the eliminated data read by the incremental offline clustering module can be deleted once clustered, the module only needs to store information that has not yet been clustered successfully, which greatly reduces memory consumption.
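A minimal Java sketch of steps S101 to S105 follows; the Clusterer and ClusterStore abstractions and their method names are assumptions introduced for illustration. Its key property is the one stated above: only records that have not yet been clustered successfully remain in the module.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Sketch of steps S101-S105; the Clusterer/ClusterStore abstractions are assumptions.
public class IncrementalOfflineClusterer<D> {

    public interface Clusterer<D> {
        // Tries to merge an eliminated record with newly added data; true on success.
        boolean tryCluster(D eliminated, D fresh);
    }

    public interface ClusterStore<D> {
        List<D> readEliminated();        // S102: eliminated records recorded by the data sync module
        void saveToMemory(D clustered);  // S105: write successfully clustered data back to memory
    }

    private final List<D> pending = new ArrayList<>();  // only not-yet-clustered records are kept
    private final Clusterer<D> clusterer;
    private final ClusterStore<D> store;

    public IncrementalOfflineClusterer(Clusterer<D> clusterer, ClusterStore<D> store) {
        this.clusterer = clusterer;
        this.store = store;
    }

    // S101: triggered by the data synchronization module for a given time window.
    public void onTriggered() {
        pending.addAll(store.readEliminated());
    }

    // S103/S104: called whenever new data is added to the in-memory store.
    public void onNewData(D fresh) {
        Iterator<D> it = pending.iterator();
        while (it.hasNext()) {
            D eliminated = it.next();
            if (clusterer.tryCluster(eliminated, fresh)) {
                store.saveToMemory(eliminated);  // S105: persist the merged cluster information
                it.remove();                     // delete it locally once clustered successfully
            }
        }
    }
}
```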
FIG. 11 schematically shows a flow diagram of data processed by the incremental offline clustering module according to an embodiment of the present disclosure. As shown in fig. 11, the incremental offline clustering module reads the obsolete memory data from the database, and when new data is added to the memory, the incremental offline clustering module reads the new data; and clustering the eliminated memory data and the newly added data, and if the clustering is successful, updating the clustering information into the memory data.
In summary, the present invention provides four core modules, including a soft-reference memory region, a memory management and control module, a data synchronization module, and an incremental offline clustering module.
The soft-reference memory area is used for storing the clustering information. It can be reclaimed when the JAVA virtual machine runs short of memory, so the problem of memory overflow is avoided. Thanks to the soft-reference memory area, the amount of clustering information stored automatically matches the memory size of the JAVA virtual machine while avoiding the risk of memory overflow, thereby solving the long-period text clustering memory management problem in the public opinion analysis field.
The memory management and control module is used to make reasonable use of the strong-reference and soft-reference memory areas. Based on statistical data, the size ratio of the strong-reference memory region to the soft-reference memory region is rapidly and dynamically adjusted by binary search, so that garbage collection time stays within a reasonable range while the memory is fully utilized.
And the data synchronization module is used for ensuring the data synchronization of the inverted index and the clustering information, and calling a callback method through the clustering information KEY when the data of the soft reference memory area is cleared by the JAVA virtual machine, so as to synchronously clear the data in the inverted index.
The method also introduces an incremental offline clustering module: the data synchronization module records the time range of the data eliminated from the memory and actively triggers the offline clustering module to perform secondary clustering on the data of a specific time window. Compared with ordinary offline clustering, the invention records the eliminated data's time range through the data synchronization module and actively triggers the incremental offline clustering module to perform offline clustering only on the data in that time range. Because the data does not need to be stored in full and only the portion eliminated from memory needs offline clustering, the offline clustering module requires less memory, and the long-period text clustering effect problem is solved.
Exemplary Medium
Having described the method of the exemplary embodiment of the present disclosure, the medium of the exemplary embodiment of the present disclosure is explained next with reference to fig. 12.
In some possible embodiments, various aspects of the disclosure may also be implemented as a computer-readable medium on which a program is stored, the program, when executed by a processor, being for implementing the steps in the memory management method according to various exemplary embodiments of the disclosure described in the "exemplary methods" section above in this specification.
Specifically, the processor is configured to implement the following steps when executing the program:
acquiring garbage collection time, wherein the garbage collection time represents the time spent on cleaning garbage objects in a memory;
and adjusting a first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection time consumption and a preset garbage collection time consumption threshold.
It should be noted that: the above-mentioned medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example but not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 12, a medium 120 according to an embodiment of the present disclosure is described, which may employ a portable compact disc read-only memory (CD-ROM), contains a program, and can be run on a device. However, the disclosure is not so limited; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar languages. The program code may execute entirely on the user computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN).
Exemplary devices
Having described the media of the exemplary embodiments of the present disclosure, the apparatus of the exemplary embodiments of the present disclosure is described next with reference to fig. 13.
The present disclosure provides a memory management device, where a memory managed by the device at least includes a first storage space for storing first data and a second storage space for storing second data; as shown in fig. 13, the memory management device according to the embodiment of the present disclosure may include:
the data monitoring and counting module 1310 is configured to obtain garbage collection time, where the garbage collection time represents a time length spent on cleaning a garbage object in a memory;
the dynamic proportion adjustment module 1320 is configured to adjust a first ratio between the capacity of the first storage space and the capacity of the memory according to the garbage collection time consumption and a preset garbage collection time consumption threshold.
In a possible implementation manner, the first storage space is a strong-reference memory space, and the second storage space is a soft-reference memory space.
In a possible embodiment, the above apparatus further comprises:
a data migration module 1330 configured to determine the capacity of the first storage space according to the first ratio; when the amount of data in the memory is greater than the capacity of the first storage space, migrate the first data exceeding that capacity to the second storage space; or, when the amount of data in the first storage space is smaller than its capacity and second data exists in the second storage space, migrate at least part of the second data in the second storage space to the first storage space.
In one possible implementation, the data migration module 1330 is configured to:
and screening third data to be migrated from the first data based on a preset data migration strategy, and migrating the third data from the first storage space to the second storage space.
In one possible implementation, the data monitoring statistic module 1310 is configured to:
searching a preset first corresponding relation according to the capacity of the memory to obtain a corresponding first ratio suggestion value and a garbage recycling time-consuming suggestion value; the first corresponding relation represents a first ratio suggestion value and a garbage recycling time consumption suggestion value corresponding to different memory capacities; and determining the garbage collection time consumption according to the garbage collection time consumption suggestion value.
In a possible implementation, the dynamic proportion adjustment module 1320 is configured to:
and under the condition that the garbage collection time consumption is not greater than the garbage collection time consumption threshold value, adjusting the first ratio to be equal to the first ratio suggestion value.
In a possible implementation, the dynamic proportion adjustment module 1320 is configured to:
determining the first ratio as a value smaller than the first ratio suggestion value when the garbage collection time consumption is greater than the garbage collection time consumption threshold;
determining the capacity of the first storage space according to the first ratio, and, when the amount of data in the memory is greater than the capacity of the first storage space, migrating the first data exceeding that capacity to the second storage space;
acquiring the current actual garbage collection time from the garbage collection log; when the actual garbage collection time is greater than the garbage collection time consumption threshold, reducing the value of the first ratio and performing the step of determining the capacity of the first storage space again based on the reduced first ratio; and ending the adjustment process when the actual garbage collection time is not greater than the garbage collection time consumption threshold.
In a possible implementation, when determining the first ratio as a value smaller than the first ratio suggestion value, the dynamic proportion adjustment module 1320 determines the first ratio as half of the first ratio suggestion value;
and when reducing the value of the first ratio, the dynamic proportion adjustment module 1320 adjusts the first ratio to half of its current value.
In a possible implementation manner, the dynamic proportion adjustment module 1320 ends the adjustment process when the actual value of the garbage collection time consumption is not greater than the garbage collection time consumption threshold, and the difference between the actual value of the garbage collection time consumption and the garbage collection time consumption threshold is not greater than the preset threshold.
In a possible implementation, the data monitoring statistic module 1310 is further configured to:
collecting, for different memory capacities, statistics of the garbage collection time consumption under different values of the first ratio;
establishing, according to the statistical results, a mapping between the first ratio and the garbage collection time consumption statistics for each memory capacity;
and determining the first correspondence according to the mapping (a sketch of this construction is given below).
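A minimal sketch of this statistics-driven construction follows; the Sample type, the slope-based inflection-point detection and all names are assumptions introduced for illustration, motivated by the inflection-point behaviour described for fig. 7.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of building the "first correspondence" from monitored statistics; all names are assumptions.
public class FirstCorrespondenceBuilder {

    public record Sample(double ratio, double gcMillis) {}
    public record Suggestion(double suggestedRatio, double suggestedGcMillis) {}

    private final Map<Integer, List<Sample>> samplesByCapacityGb = new HashMap<>();

    // Records one monitored (ratio, GC time) pair for a given memory capacity.
    public void record(int capacityGb, double ratio, double gcMillis) {
        samplesByCapacityGb.computeIfAbsent(capacityGb, c -> new ArrayList<>())
                           .add(new Sample(ratio, gcMillis));
    }

    // Derives a suggested ratio and GC time per capacity by finding where GC time starts rising sharply.
    public Map<Integer, Suggestion> build(double slopeJumpFactor) {
        Map<Integer, Suggestion> correspondence = new HashMap<>();
        for (Map.Entry<Integer, List<Sample>> e : samplesByCapacityGb.entrySet()) {
            List<Sample> sorted = new ArrayList<>(e.getValue());
            sorted.sort((a, b) -> Double.compare(a.ratio(), b.ratio()));
            Sample knee = sorted.get(sorted.size() - 1); // fallback: last sample
            for (int i = 1; i < sorted.size() - 1; i++) {
                double slopeBefore = slope(sorted.get(i - 1), sorted.get(i));
                double slopeAfter = slope(sorted.get(i), sorted.get(i + 1));
                if (slopeAfter > slopeJumpFactor * Math.max(slopeBefore, 1e-9)) {
                    knee = sorted.get(i); // GC time grows sharply past this ratio (the inflection point)
                    break;
                }
            }
            correspondence.put(e.getKey(), new Suggestion(knee.ratio(), knee.gcMillis()));
        }
        return correspondence;
    }

    private static double slope(Sample a, Sample b) {
        return (b.gcMillis() - a.gcMillis()) / Math.max(b.ratio() - a.ratio(), 1e-9);
    }
}
```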
In a possible implementation manner, the memory further includes an index area for storing indexes of the first data and the second data.
In a possible embodiment, the above apparatus further comprises:
the data synchronization module 1340 is configured to determine second data that is cleared when the second data in the second storage space is cleared; and updating the index according to the cleared second data.
In a possible implementation, the apparatus further includes an incremental offline clustering module 1350 configured to:
storing the second data cleared in the second storage space into a database;
reading the second data which is cleared from the database;
when new data is stored in the memory, clustering the new data and the removed second data, and if clustering is successful, storing the clustered data in the memory;
and updating the index according to the clustered data.
In one possible embodiment, the first data and the second data include public opinion text data;
in a possible embodiment, the above apparatus further comprises:
a memory region clustering module 1360, configured to cluster public opinion text data in a memory;
the analysis module 1370 is configured to perform at least one of propagation path analysis, emotion data analysis, and public opinion trend analysis on the clustered data.
Exemplary computing device
Having described the methods, media, and apparatus of the exemplary embodiments of the present disclosure, a computing device of the exemplary embodiments of the present disclosure is described next with reference to fig. 14.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," a "module," or a "system."
In some possible implementations, a computing device according to embodiments of the present disclosure may include at least one processing unit and at least one memory unit. Wherein the storage unit stores program code that, when executed by the processing unit, causes the processing unit to perform the steps in the memory management methods according to various exemplary embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
A computing device 140 according to such an embodiment of the present disclosure is described below with reference to fig. 14. The computing device 140 shown in fig. 14 is only one example and should not impose any limitations on the functionality or scope of use of embodiments of the disclosure.
As shown in fig. 14, computing device 140 is in the form of a general purpose computing device. Components of computing device 140 may include, but are not limited to: the at least one processing unit 1401 and the at least one memory unit 1402 are connected to a bus 1403 which connects different system components (including the processing unit 1401 and the memory unit 1402).
The bus 1403 includes a data bus, a control bus, and an address bus.
The storage unit 1402 may include readable media in the form of volatile memory, such as Random Access Memory (RAM)14021 and/or cache memory 14022, and may further include readable media in the form of non-volatile memory, such as Read Only Memory (ROM) 14023.
It should be noted that although several units/modules or sub-units/sub-modules of the memory management device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module; conversely, the features and functions of one unit/module described above may be further divided and embodied by a plurality of units/modules.
Further, while the operations of the disclosed methods are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the present disclosure is not limited to the particular embodiments disclosed; nor does the division into aspects imply that features in those aspects cannot be combined to benefit, this division being for convenience of expression only. The disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (10)
1. A memory management method, wherein the memory at least includes a first storage space for storing first data and a second storage space for storing second data, the method comprising:
acquiring garbage collection time, wherein the garbage collection time represents the time spent on cleaning the garbage objects in the memory;
and adjusting a first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection time consumption and a preset garbage collection time consumption threshold.
2. The method of claim 1, wherein the first storage space is a strongly-referenced memory space and the second storage space is a soft-referenced memory space.
3. The method of claim 1 or 2, further comprising:
determining the capacity of the first storage space according to the first ratio;
under the condition that the data amount in the memory is larger than the capacity of the first storage space, migrating the first data exceeding the capacity part of the first storage space to the second storage space; or,
migrating at least part of second data in the second storage space to the first storage space when the amount of data in the first storage space is less than the capacity of the first storage space and the second data exists in the second storage space.
4. The method of claim 3, wherein the migrating the first data beyond the capacity portion of the first storage space to the second storage space comprises:
and screening third data to be migrated from the first data based on a preset data migration strategy, and migrating the third data from the first storage space to the second storage space.
5. The method of claim 1 or 2, wherein the obtaining the garbage collection time comprises:
searching a preset first corresponding relation according to the capacity of the memory to obtain a corresponding first ratio suggestion value and a garbage recycling time-consuming suggestion value; the first corresponding relation represents a first ratio suggestion value and a garbage recycling time consumption suggestion value corresponding to different memory capacities;
and determining the garbage collection time consumption according to the garbage collection time consumption suggested value.
6. The method according to claim 5, wherein the adjusting the first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection consumed time and a preset garbage collection consumed time threshold comprises:
and when the garbage collection consumed time is not greater than the garbage collection consumed time threshold value, adjusting the first ratio to be equal to the first ratio suggestion value.
7. The method according to claim 5, wherein the adjusting the first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection consumed time and a preset garbage collection consumed time threshold comprises:
determining the first ratio to be a numerical value smaller than the first ratio suggestion value under the condition that the garbage recycling consumed time is larger than the garbage recycling consumed time threshold;
determining the capacity of the first storage space according to the first ratio, and migrating the first data exceeding the capacity of the first storage space to the second storage space under the condition that the data amount in the memory is larger than the capacity of the first storage space;
acquiring a current garbage recycling time-consuming actual value from a garbage recycling log, reducing the value of the first ratio under the condition that the garbage recycling time-consuming actual value is larger than the garbage recycling time-consuming threshold value, and executing the step of determining the capacity of the first storage space again based on the reduced first ratio; and ending the adjusting process under the condition that the actual value of the garbage recycling consumed time is not greater than the garbage recycling consumed time threshold value.
8. A memory management apparatus, wherein the memory at least includes a first storage space for storing first data and a second storage space for storing second data, the apparatus comprising:
the data monitoring and counting module is used for acquiring garbage collection time, and the garbage collection time represents the time spent on cleaning the garbage objects in the memory;
and the proportion dynamic adjustment module is used for adjusting a first ratio of the capacity of the first storage space to the capacity of the memory according to the garbage collection time consumption and a preset garbage collection time consumption threshold.
9. A medium storing a computer program, characterized in that the program, when being executed by a processor, carries out the method according to any one of claims 1-7.
10. A computing device, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110127352.3A CN112783656B (en) | 2021-01-29 | 2021-01-29 | Memory management method, medium, device and computing equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110127352.3A CN112783656B (en) | 2021-01-29 | 2021-01-29 | Memory management method, medium, device and computing equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112783656A true CN112783656A (en) | 2021-05-11 |
CN112783656B CN112783656B (en) | 2024-04-30 |
Family
ID=75759894
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110127352.3A Active CN112783656B (en) | 2021-01-29 | 2021-01-29 | Memory management method, medium, device and computing equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112783656B (en) |
- 2021-01-29: CN application CN202110127352.3A granted as CN112783656B (en), legal status: Active
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130086131A1 (en) * | 2011-10-03 | 2013-04-04 | Oracle International Corporation | Time-based object aging for generational garbage collectors |
US20150026428A1 (en) * | 2013-07-18 | 2015-01-22 | International Business Machines Corporation | Memory use for garbage collected computer environments |
US20150121029A1 (en) * | 2013-10-24 | 2015-04-30 | International Business Machines Corporation | Memory management with priority-based memory reclamation |
WO2015085732A1 (en) * | 2013-12-10 | 2015-06-18 | 中兴通讯股份有限公司 | Terminal memory processing method and apparatus, and terminal |
CN106201904A (en) * | 2016-06-30 | 2016-12-07 | 网易(杭州)网络有限公司 | Method and device for internal memory garbage reclamation |
US20180276117A1 (en) * | 2017-03-21 | 2018-09-27 | Linkedin Corporation | Automated virtual machine performance tuning |
US20190065366A1 (en) * | 2017-08-31 | 2019-02-28 | Micron Technology, Inc. | Memory device with dynamic cache management |
US20190073298A1 (en) * | 2017-09-05 | 2019-03-07 | Phison Electronics Corp. | Memory management method, memory control circuit unit and memory storage apparatus |
CN107391774A (en) * | 2017-09-15 | 2017-11-24 | 厦门大学 | The rubbish recovering method of JFS based on data de-duplication |
US20200073797A1 (en) * | 2018-08-29 | 2020-03-05 | International Business Machines Corporation | Maintaining correctness of pointers from a managed heap to off-heap memory |
CN109343796A (en) * | 2018-09-21 | 2019-02-15 | 新华三技术有限公司 | A kind of data processing method and device |
US20200117640A1 (en) * | 2018-10-12 | 2020-04-16 | EMC IP Holding Company LLC | Method, device and computer program product for managing storage system |
US20200133551A1 (en) * | 2018-10-30 | 2020-04-30 | EMC IP Holding Company LLC | Method, electronic device, and program product for scheduling requests for reclaiming storage space |
CN110727605A (en) * | 2019-09-27 | 2020-01-24 | Oppo(重庆)智能科技有限公司 | Memory recovery method and device and electronic equipment |
CN111221475A (en) * | 2020-01-04 | 2020-06-02 | 苏州浪潮智能科技有限公司 | Storage space management method, device, equipment and readable medium |
CN111352698A (en) * | 2020-02-25 | 2020-06-30 | 北京奇艺世纪科技有限公司 | JVM parameter adjusting method and device |
CN111352593A (en) * | 2020-02-29 | 2020-06-30 | 杭州电子科技大学 | Solid state disk data writing method for distinguishing fast writing from normal writing |
Non-Patent Citations (3)
Title |
---|
NICHOLAS HARVEY-LEES-GREEN; MORTEZA BIGLARI-ABHARI; AVINASH MALIK; ZORAN SALCIC: "A Dynamic Memory Management Unit for Real Time Systems", 2017 IEEE 20th International Symposium on Real-Time Distributed Computing (ISORC), 3 July 2017 (2017-07-03), pages 2375 - 5261 *
XU ZHENGCHAO; YU CHENG: "Research on the memory management mechanism in the Java virtual machine", Journal of South-Central University for Nationalities (Natural Science Edition), no. 03, pages 91 - 95 *
ZHAO JUNXIAN; YU JIAN: "Spark storage performance optimization based on non-serialized local storage of RDDs", Computer Science, no. 05, 15 May 2019 (2019-05-15), pages 150 - 156 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115269660A (en) * | 2022-09-26 | 2022-11-01 | 平安银行股份有限公司 | Cache data processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112783656B (en) | 2024-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210240636A1 (en) | Memory Management Method and Apparatus | |
CN109857556B (en) | Memory recovery method and device, storage medium and electronic equipment | |
CN102331986B (en) | Database cache management method and database server | |
US20160335177A1 (en) | Cache Management Method and Apparatus | |
CN110209348B (en) | Data storage method and device, electronic equipment and storage medium | |
CN111352861A (en) | Memory compression method and device and electronic equipment | |
CN110554837A (en) | Intelligent switching of fatigue-prone storage media | |
US20210157683A1 (en) | Method, device and computer program product for managing data backup | |
CN110543435A (en) | Mixed mapping operation method, device and equipment of storage unit and storage medium | |
CN112783656B (en) | Memory management method, medium, device and computing equipment | |
CN114996173B (en) | Method and device for managing write operation of storage equipment | |
US11093389B2 (en) | Method, apparatus, and computer program product for managing storage system | |
US20200334142A1 (en) | Quasi-compacting garbage collector for data storage system | |
CN108681469B (en) | Page caching method, device, equipment and storage medium based on Android system | |
CN113609090A (en) | Data storage method and device, computer readable storage medium and electronic equipment | |
CN113742058A (en) | Method and device for managing out-of-heap memory | |
CN110716763B (en) | Automatic optimization method and device for web container, storage medium and electronic equipment | |
CN111858393A (en) | Memory page management method, memory page management device, medium and electronic device | |
US20190114082A1 (en) | Coordination Of Compaction In A Distributed Storage System | |
US11194861B2 (en) | Graph partitioning method and apparatus | |
CN115543859A (en) | Wear leveling optimization method, device, equipment and medium for multi-partition SSD | |
CN116701239A (en) | Fragment recycling management method and device based on partner algorithm and computer equipment | |
CN115390754A (en) | Hard disk management method and device | |
CN112256997A (en) | Page management method, page management device, storage medium and electronic equipment | |
CN108984431B (en) | Method and apparatus for flushing stale caches |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20210928 Address after: 310052 Room 408, building 3, No. 399, Wangshang Road, Changhe street, Binjiang District, Hangzhou City, Zhejiang Province Applicant after: Hangzhou Netease Zhiqi Technology Co.,Ltd. Address before: 310052 Building No. 599, Changhe Street Network Business Road, Binjiang District, Hangzhou City, Zhejiang Province, 4, 7 stories Applicant before: NETEASE (HANGZHOU) NETWORK Co.,Ltd. |
|
GR01 | Patent grant | ||
GR01 | Patent grant |