
CN116069493A - Data processing method, device, equipment and readable storage medium - Google Patents

Data processing method, device, equipment and readable storage medium

Info

Publication number
CN116069493A
Authority
CN
China
Prior art keywords
memory
domain
block
target
memory domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111293160.6A
Other languages
Chinese (zh)
Inventor
吴晨涛
李颉
过敏意
卢熠辉
郭翰宸
郭振宇
王佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202111293160.6A
Publication of CN116069493A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a data processing method, a device, equipment and a readable storage medium, wherein the method comprises the following steps: receiving a first memory allocation request sent by a first process, and acquiring a memory domain set according to the first memory allocation request; each memory domain in the memory domain set consists of one or more memory blocks; determining a memory domain to be allocated corresponding to a first process according to a memory domain set, determining a first target memory block corresponding to the first process in one or more memory blocks included in the memory domain to be allocated according to a first memory allocation request, and allocating the first target memory block to the first process; updating the memory domain to be allocated in the memory domain set into an updated memory domain according to the first target memory block; the memory domain set including the updated memory domain is configured to allocate a second target memory block for the second process when a second memory allocation request of the second process is received. By adopting the method and the device, the memory allocation efficiency of the system can be improved, and the resource utilization rate is improved.

Description

Data processing method, device, equipment and readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data processing method, apparatus, device, and readable storage medium.
Background
In an operating system (such as the Linux operating system), the management and allocation of memory are important factors that affect the stable operation of the system. The main function of memory management is to organize the free memory blocks in the system and to respond to memory allocation and reclamation requests from applications.
Currently, the buddy algorithm is the main memory management method. It was designed for small single-core operating systems, and under a buddy-based memory manager all memory requests must be executed serially. Because it targets small single-core systems, in a multi-core operating system the buddy algorithm cannot respond to memory requests in real time; once multiple processes send memory requests concurrently, it may cause serious memory allocation delays, which severely affects memory allocation efficiency.
Disclosure of Invention
The embodiment of the application provides a data processing method, a device, equipment and a readable storage medium, which can improve the memory allocation efficiency of a system and the resource utilization rate.
In one aspect, an embodiment of the present application provides a data processing method, including:
receiving a first memory allocation request sent by a first process, and acquiring a memory domain set according to the first memory allocation request; each memory domain in the memory domain set consists of one or more memory blocks;
Determining a memory domain to be allocated corresponding to a first process according to a memory domain set, determining a first target memory block corresponding to the first process in one or more memory blocks included in the memory domain to be allocated according to a first memory allocation request, and allocating the first target memory block to the first process;
updating the memory domain to be allocated in the memory domain set into an updated memory domain according to the first target memory block; the memory domain set including the updated memory domain is configured to allocate a second target memory block for the second process when a second memory allocation request of the second process is received.
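For ease of understanding, a minimal C sketch of one possible realisation of these three steps is given below. It is not the patent's implementation: the names (mem_block, mem_domain, domain_set, alloc_from_set) and the use of a per-domain mutex as the idle/locked state are illustrative assumptions.

```c
#include <stddef.h>
#include <pthread.h>

struct mem_block {                 /* one free page frame */
    struct mem_block *next;
    unsigned long pfn;             /* page-frame number */
};

struct mem_domain {                /* one memory domain: a group of free blocks */
    struct mem_block *free_list;
    size_t nr_free;
    pthread_mutex_t lock;          /* held while the domain serves one process */
};

struct domain_set {                /* the memory domain set */
    struct mem_domain **domains;
    size_t nr_domains;
};

/* The three claimed steps, greatly simplified: pick an allocatable (idle)
 * domain, take the requested blocks from it as the first target memory block,
 * and leave the remainder behind as the updated memory domain. */
struct mem_block *alloc_from_set(struct domain_set *set, size_t nr_blocks)
{
    if (nr_blocks == 0)
        return NULL;
    for (size_t i = 0; i < set->nr_domains; i++) {
        struct mem_domain *d = set->domains[i];
        if (pthread_mutex_trylock(&d->lock) != 0)
            continue;                        /* domain already locked: try the next one */
        if (d->nr_free < nr_blocks) {
            pthread_mutex_unlock(&d->lock);
            continue;
        }
        struct mem_block *head = d->free_list, *tail = head;
        for (size_t n = 1; n < nr_blocks; n++)
            tail = tail->next;
        d->free_list = tail->next;           /* remaining blocks form the updated domain */
        d->nr_free  -= nr_blocks;
        tail->next = NULL;
        pthread_mutex_unlock(&d->lock);      /* updated domain is allocatable again */
        return head;                         /* first target memory block(s) */
    }
    return NULL;                             /* no allocatable domain: caller must wait */
}
```

Because each domain has its own lock, a second process can be served concurrently from another domain in the set, which is the point of the claimed arrangement.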
An aspect of an embodiment of the present application provides a data processing apparatus, including:
the request processing module is used for receiving a first memory allocation request sent by a first process and acquiring a memory domain set according to the first memory allocation request; each memory domain in the memory domain set consists of one or more memory blocks;
the memory domain determining module is used for determining a memory domain to be allocated corresponding to the first process according to the memory domain set;
the memory block allocation module is used for determining a first target memory block corresponding to the first process in one or more memory blocks included in the memory domain to be allocated according to the first memory allocation request, and allocating the first target memory block to the first process;
The memory domain updating module is used for updating the memory domain to be allocated in the memory domain set into an updated memory domain according to the first target memory block; the memory domain set including the updated memory domain is configured to allocate a second target memory block for the second process when a second memory allocation request of the second process is received.
In one embodiment, the memory domain determination module includes:
the first domain detection unit is used for detecting the memory domains in the memory domain set;
the first domain obtaining unit is used for obtaining a memory domain to be allocated corresponding to the first process from the allocatable memory domains if it is detected that an allocatable memory domain exists in the memory domain set, and switching the memory domain to be allocated from an idle state to a locked state; each memory block in an allocatable memory domain is in an idle state; a memory domain to be allocated that is in the locked state has no memory block in the idle state.
In one embodiment, the memory domain determination module includes:
the second domain detection unit is used for detecting the memory domains in the memory domain set;
the second domain obtaining unit is used for setting the first process to a waiting state if it is detected that no allocatable memory domain exists in the memory domain set, switching the first process from the waiting state to an allocation state when it is detected that an allocatable memory domain reappears in the memory domain set, obtaining the memory domain to be allocated corresponding to the first process in the allocation state from the allocatable memory domains, and switching the memory domain to be allocated from the idle state to the locked state; each memory block in an allocatable memory domain is in an idle state; a memory domain to be allocated that is in the locked state has no memory block in the idle state.
In one embodiment, the memory domain update module includes:
the block determining unit is used for taking the memory blocks, except the first target memory block, in the memory domain to be allocated as first remaining memory blocks;
the domain composition unit is used for composing the updated memory domain from the first remaining memory blocks; the updated memory domain in the memory domain set belongs to the allocatable memory domains.
In one embodiment, the memory domain update module includes:
the domain dividing unit is used for dividing the memory domain to be allocated according to the first target memory block to obtain a first divided memory domain and a second divided memory domain;
an update domain determining unit, configured to determine the first divided memory domain and the second divided memory domain as update memory domains; the updated memory domain in the memory domain set belongs to the allocatable memory domain.
In one embodiment, the domain dividing unit includes:
the to-be-divided block determining subunit is used for determining to-be-divided memory blocks in the to-be-divided memory domain according to the first target memory block; the number of the memory blocks to be divided is larger than that of the first target memory blocks;
a difference block obtaining subunit, configured to obtain a difference memory block between the memory block to be divided and the first target memory block;
The dividing domain composing subunit is used for composing a first dividing memory domain according to the difference memory blocks and composing a second dividing memory domain according to the second remaining memory blocks; the second remaining memory blocks refer to memory blocks in the memory domain to be allocated other than the memory blocks to be partitioned.
In one embodiment, the memory domain determining module further comprises:
the domain number determining unit is used for obtaining the remaining memory domains, except the memory domain to be allocated, in the allocatable memory domains, and the domain number of the remaining memory domains;
the domain merging unit is used for performing domain merging on the remaining memory domains when the domain number is greater than a number threshold, to obtain a merged memory domain; the merged memory domain in the memory domain set belongs to the allocatable memory domains.
In one embodiment, the data processing apparatus further comprises:
the state detection module is used for receiving a memory release request for a first target memory block sent by a first process;
the state detection module is also used for detecting the storage state of the memory buffer area according to the memory release request;
the first memory block release module is used for adding the first target memory block into the memory buffer area if the storage state of the memory buffer area is the first storage state;
The first memory block release module is further configured to release the first target memory block in the memory buffer into the memory domain set when the memory block release condition is satisfied;
the second memory block releasing module is used for adjusting the storage capacity of the memory buffer area according to the first target memory block if the storage state of the memory buffer area is the second storage state, so as to obtain an adjusted memory buffer area;
the second memory block releasing module is further configured to add the first target memory block to the adjusted memory buffer, and release the first target memory block in the adjusted memory buffer to the memory domain set when the memory block releasing condition is satisfied.
In one embodiment, the status detection module includes:
the process information acquisition unit is used for acquiring the historical memory release behavior information of the first process and the virtual address space;
the predicted quantity determining unit is used for determining the predicted memory release request quantity of the first process in the target time period according to the historical memory release behavior information and the virtual address space;
the state detection unit is used for obtaining the target block number of the first target memory block and the predicted block number corresponding to the predicted memory release request amount, and detecting the storage state of the memory buffer area according to the target block number and the predicted block number.
In one embodiment, the state detection unit includes:
an operation subunit, configured to perform operation processing on the number of target blocks and the number of predicted blocks to obtain a total number;
a capacity acquisition subunit, configured to acquire a remaining storage capacity of the memory buffer area;
a state determining subunit, configured to determine, if the remaining storage capacity of the memory buffer area is greater than the total number, the storage state of the memory buffer area as a first storage state;
the state determining subunit is further configured to determine the storage state of the memory buffer as the second storage state if the remaining storage capacity of the memory buffer is less than the total number.
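A hedged sketch of this storage-state check follows: the buffer is in the first state when its remaining capacity can absorb both the blocks being released now and the releases predicted for the target time period. The struct and function names are assumptions, and the predicted block number is assumed to be supplied by the caller (the prediction itself is not modelled here).

```c
#include <stddef.h>

enum buf_state { BUF_STATE_FIRST, BUF_STATE_SECOND };

struct mem_buffer {
    size_t capacity;   /* total slots in the per-process release buffer */
    size_t used;       /* slots already holding blocks awaiting release */
};

enum buf_state detect_buffer_state(const struct mem_buffer *buf,
                                   size_t target_blocks,      /* blocks released right now        */
                                   size_t predicted_blocks)   /* predicted releases in the period */
{
    size_t total = target_blocks + predicted_blocks;   /* "operation processing" taken as a sum */
    size_t remaining = buf->capacity - buf->used;      /* remaining storage capacity */
    /* greater than the total -> first state; otherwise -> second state
     * (the text leaves the equal case open; it is folded into the second state here) */
    return remaining > total ? BUF_STATE_FIRST : BUF_STATE_SECOND;
}
```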
In one embodiment, the first memory block release module includes:
the stored block obtaining unit is used for obtaining the stored memory blocks included in the memory buffer area when the memory block release time is reached; the stored memory blocks include a first target memory block;
the to-be-added domain determining unit is used for determining a memory domain to be added from the allocatable memory domains in the memory domain set; each memory block in an allocatable memory domain is in an idle state;
and the block adding unit is used for releasing the stored memory block into the memory domain to be added.
In one embodiment, the second memory block release module includes:
the quantity determining unit is used for obtaining the predicted memory release request quantity of the first process in the target time period and the predicted block quantity corresponding to the predicted memory release request quantity;
the number determining unit is further used for obtaining the target block number of the first target memory block, and determining the total number corresponding to the target block number and the predicted block number;
the capacity adjusting unit is used for acquiring the residual storage capacity corresponding to the memory buffer area, and adjusting the storage capacity of the memory buffer area according to the residual storage capacity and the total number to obtain an adjusted memory buffer area; the updated remaining storage capacity of the adjusted memory buffer is greater than the total number.
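The capacity adjustment in the second storage state can be sketched as follows: grow the buffer until its remaining capacity strictly exceeds the total of current and predicted releases, then add the released blocks. The realloc-based growth strategy and the names (release_buffer, adjust_buffer_capacity) are assumptions for illustration only.

```c
#include <stdlib.h>
#include <stddef.h>

struct release_buffer {
    unsigned long *pfns;   /* page-frame numbers queued for release */
    size_t capacity;
    size_t used;
};

/* Grow the buffer until its remaining capacity strictly exceeds `total`
 * (the target block number plus the predicted block number).
 * Returns 0 on success, -1 if memory for the larger buffer is unavailable. */
int adjust_buffer_capacity(struct release_buffer *buf, size_t total)
{
    size_t needed = buf->used + total + 1;   /* remaining capacity > total afterwards */
    if (buf->capacity >= needed)
        return 0;
    unsigned long *p = realloc(buf->pfns, needed * sizeof(*p));
    if (p == NULL)
        return -1;
    buf->pfns = p;
    buf->capacity = needed;                  /* this is the "adjusted memory buffer" */
    return 0;
}
```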
In one aspect, a computer device is provided, including: a processor and a memory;
the memory stores a computer program that, when executed by the processor, causes the processor to perform the methods of embodiments of the present application.
In one aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program, where the computer program includes program instructions that, when executed by a processor, perform a method in an embodiment of the present application.
In one aspect of the present application, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in an aspect of the embodiments of the present application.
In an embodiment of the present application, a memory domain set (including one or more memory domains, where each memory domain is made up of one or more memory blocks) may be provided, and when a concurrent memory allocation request of a process is received, one or more memory domains included in the memory domain set may concurrently respond to the memory allocation request of the process. For example, when a first memory allocation request of a first process is received, the first process does not need to wait for a history process (such as a process that issues a memory allocation request prior to the first process) to complete the request, but can determine a memory domain to be allocated corresponding to the first process quickly and efficiently directly according to a plurality of memory domains in the memory domain set, and allocate a first target memory block for the first process by the memory domain to be allocated. It can be understood that by combining different memory blocks into different memory domains, memory fragmentation can be reduced, so that the memory resource utilization rate can be improved; meanwhile, in a multi-process concurrent memory request scene, the memory allocation delay is reduced, and the memory request response efficiency is improved. In addition, after the first target memory block is allocated for the first process, the memory domain to be allocated can be updated in time according to the first target memory block, namely the memory domain to be allocated can be updated into an updated memory domain, so that the memory domain in the memory domain set can be dynamically updated and adjusted in real time according to the allocation condition of the memory block, the memory block can be allocated for the subsequent process more accurately and timely according to the memory allocation request (such as the second memory allocation request of the second process) of the subsequent process, and the response efficiency of the memory request is further improved. In conclusion, the memory allocation efficiency of the system can be improved, and the resource utilization rate is improved.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or of the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the invention, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1 is a network architecture diagram provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart of a data processing method according to an embodiment of the present application;
fig. 3a is a schematic diagram of providing services in a high concurrency scenario in a memory domain according to an embodiment of the present application;
fig. 3b is a schematic diagram of providing services in a low concurrency scenario in a memory domain according to an embodiment of the present application;
fig. 4a is a schematic view of a scenario in which a memory domain is split according to an embodiment of the present application;
fig. 4b is a schematic view of a scenario in which memory domains are merged according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 6 is a system block diagram provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a data processing apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
Referring to fig. 1, fig. 1 is a network architecture diagram provided in an embodiment of the present application. As shown in fig. 1, the network architecture may include a service server 1000 and a terminal device cluster, which may include one or more terminal devices, the number of which will not be limited here. As shown in fig. 1, the plurality of terminal devices may include a terminal device 100a, a terminal device 100b, terminal devices 100c, …, a terminal device 100n; as shown in fig. 1, the terminal devices 100a, 100b, 100c, …, 100n may respectively perform network connection with the service server 1000, so that each terminal device may perform data interaction with the service server 1000 through the network connection.
Each terminal device may be installed with a target application that, when running in the respective terminal device, may interact with the service server 1000 shown in fig. 1 described above. The target application may include an application having a function of displaying data information such as text, image, audio, and video. For example, the application may be an entertainment-type application (e.g., a gaming application, a video application, etc.), may be used for a user to upload data, view data, etc. (e.g., view video); the application may also be a shopping-type application (e.g., an e-commerce application) that may be used for a user to upload items, purchase items, and the like. Of course, the application may be other applications having a function of displaying data information, which is not illustrated here. It will be appreciated that when the terminal device runs a certain application, the service server 1000 may provide services for the application, and when the application at the terminal device is started to run, the corresponding central processing unit (Central Processing Unit, CPU) will also run correspondingly, and after the CPU in the running state requests the memory, the CPU in the running state will provide corresponding computing power service for the application of the terminal device.
It can be understood that, in order to improve the response efficiency of memory allocation requests from CPUs, so that each CPU can be allocated memory timely and efficiently, the present application proposes a data processing method (i.e., a memory management method) which can organize the memory blocks in an idle state (which can be understood as an unoccupied, unused state) in the system into a plurality of different memory domains (i.e., a memory domain is formed by one or more memory blocks), where the plurality of different memory domains can form a memory domain set. When a plurality of CPUs run concurrently and generate memory allocation requests concurrently, a certain memory domain can be determined for each CPU through the memory domain set to allocate memory blocks for it. For a specific implementation of dividing memory domains to obtain a memory domain set and allocating memory for different CPUs according to the memory domain set, refer to the description in the embodiment corresponding to fig. 2.
The embodiment of the application can select one terminal device from a plurality of terminal devices as a target terminal device, and the terminal device can include: smart phones, tablet computers, notebook computers, desktop computers, smart watches, smart car terminals, smart appliances (e.g., smart televisions, smart speakers, etc.), smart voice interaction devices, etc., carry smart terminals with multimedia data processing functions (e.g., video data playing functions, music data playing functions), but are not limited thereto. For example, the embodiment of the present application may use the terminal device 100a shown in fig. 1 as the target terminal device, where the target terminal device may be integrated with the target application, and at this time, the target terminal device may perform data interaction between the target application and the service server 1000.
It is understood that the method provided in the embodiments of the present application may be performed by a computer device, including but not limited to a terminal device or a service server. The service server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDNs, basic cloud computing services such as big data and artificial intelligence platforms.
The terminal device and the service server may be directly or indirectly connected through wired or wireless communication, which is not limited herein.
Alternatively, it is understood that the computer device (e.g., the service server 1000, the terminal device 100a, the terminal device 100b, etc.) may be a node in a distributed system, where the distributed system may be a blockchain system, and the blockchain system may be a distributed system formed by connecting the plurality of nodes through network communication. A peer-to-peer (P2P) network may be formed between the nodes, and the P2P protocol is an application-layer protocol running on top of the Transmission Control Protocol (TCP). In a distributed system, any form of computer device, such as a service server or terminal device, can become a node in the blockchain system by joining the peer-to-peer network. For ease of understanding, the concept of blockchain is described as follows: a blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms and encryption algorithms, and is mainly used for sorting data in chronological order, encrypting the data into a ledger, preventing the ledger from being tampered with or forged, and simultaneously verifying, storing and updating the data. When the computer device is a blockchain node, due to the characteristics of the blockchain, such as tamper resistance and forgery resistance, the data in the application (such as data related to the running of the application program) can have authenticity and security, so that the results obtained after data processing based on such data are more reliable.
Further, for ease of understanding, please refer to fig. 2, fig. 2 is a flow chart of a data processing method according to an embodiment of the present application. The method may be performed by a terminal device (e.g., any terminal device in the terminal device cluster shown in fig. 1, such as the terminal device 100 a), or may be performed by a service server (e.g., the service server 1000 in the embodiment corresponding to fig. 1), or may be performed by the terminal device and the service server together. For easy understanding, this embodiment will be described by taking this method as an example by the service server described above. The data processing method at least comprises the following steps of S101-S103:
step S101, a first memory allocation request sent by a first process is received, and a memory domain set is obtained according to the first memory allocation request; each memory domain in the memory domain set is composed of one or more memory blocks.
In this application, a process may refer to a running activity of a program with an independent function with respect to a certain data set, where each process may apply for and own a system resource (such as a memory resource). In general, an application program may correspond to a process, for example, when an application installed on a terminal device is started to run, a corresponding process may be generated, and the process may apply for system resources to a system, where the system may allocate a different memory space for each process during running. It should be understood that, the first process in the present application may refer to any process, and for a multi-core operating system (multi-CPU operating system), multiple processes may be calculated in parallel in multiple CPUs, where it may be understood that one process corresponds to one CPU (i.e., one process performs calculation in one CPU, and the CPU requests a memory resource for its corresponding process).
It can be appreciated that the memory allocation request sent by the first process may be referred to as the first memory allocation request, and the service server may obtain the memory domain set after receiving the first memory allocation request. The memory domain set may be composed of one or more memory domains, each of which is composed of one or more memory blocks. The memory blocks of a memory domain here may refer to memory blocks in an idle state (an unoccupied, unused state) in the system. It should be understood that in an operating system, physical memory may be partitioned into blocks of a certain size (e.g., 4K), and a block of 4K size, called the minimum management unit, may be referred to as a page frame; a memory block may refer to a page frame, and one or more free page frames may form a memory domain. A small runnable illustration of this page-frame notion follows.
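The sketch below only illustrates the "minimum management unit" arithmetic: with 4 KiB page frames, a physical address maps to a page-frame number by dropping the low 12 bits. The 4 KiB PAGE_SHIFT value and the sample address are assumptions used for the example.

```c
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)   /* 4096-byte minimum management unit */

int main(void)
{
    unsigned long phys_addr = 0x12345678UL;
    unsigned long pfn  = phys_addr >> PAGE_SHIFT;  /* page frame number */
    unsigned long base = pfn << PAGE_SHIFT;        /* start address of that frame */
    printf("addr 0x%lx -> page frame %lu (frame base 0x%lx, frame size %lu bytes)\n",
           phys_addr, pfn, base, PAGE_SIZE);
    return 0;
}
```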
It can be understood that, in the present application, the memory domain set may be adjusted and updated in real time according to the concurrency condition of the memory allocation request of the process (that is, the updated memory domain set is adjusted by adjusting the memory domain), and then the memory domain set at the moment when the first process issues the first memory allocation request may refer to the set obtained by dynamically updating and adjusting according to the concurrency condition (including the high concurrency condition and the low concurrency condition) of the history process (that is, the process that issues the memory allocation request prior to the first process). For a specific implementation of dynamically adjusting the memory domain, reference may be made to the description of the corresponding embodiment of fig. 2.
Step S102, determining a memory domain to be allocated corresponding to a first process according to the memory domain set, determining a first target memory block corresponding to the first process in one or more memory blocks included in the memory domain to be allocated according to the first memory allocation request, and allocating the first target memory block to the first process.
In the application, in the obtained memory domain set, the current memory domain set can be detected, and the memory domain to be allocated of the first process is determined according to the detection result. In one embodiment, the specific implementation manner of determining, according to the memory domain set, the memory domain to be allocated corresponding to the first process may be: the memory domain in the memory domain set can be detected; if the memory domain set is detected to have the allocatable memory domain, the memory domain to be allocated corresponding to the first process can be obtained in the allocatable memory domain, and the memory domain to be allocated is switched from an idle state to a locking state; wherein each memory block in the allocatable memory domain is in an idle state; there is no memory block in the idle state in the memory domain to be allocated in the locked state.
It will be appreciated that when a CPU (which may correspond to a process) issues a memory allocation request, a certain memory domain may serve it. A memory domain that is serving one CPU cannot serve other CPUs; it can be regarded as locked and may be set to a locked state, and the memory blocks in a memory domain in the locked state can no longer be allocated to other CPUs. Here, the state in which a memory block can be allocated to other CPUs may be referred to as an idle state; accordingly, when the memory blocks of a memory domain cannot be allocated to other CPUs, each memory block in that memory domain may be regarded as being in the locked state (i.e., there is no memory block in the idle state that can be allocated to other CPUs). It should be appreciated that when a certain memory domain is not serving any CPU, the memory domain waiting to serve some CPU may be referred to as an allocatable memory domain, and each memory block in it may be allocated to any CPU that issues a memory allocation request, that is, each memory block in the memory domain is in the idle state.
It should be understood that, when the first memory allocation request sent by the first process is received, if it is detected that there are allocable memory domains in the memory domain set, a certain memory domain may be directly obtained in the allocable memory domains, which is used as a memory domain to be allocated for the first process, where the memory domain to be allocated provides services for the first process. And when the memory domain to be allocated is to provide service for the first process, the state of the memory domain to be allocated can be switched from the idle state to the locking state.
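One possible way to realise the idle-to-locked switch is a per-domain flag flipped atomically, so that a domain in service never accepts a second CPU. This is an illustrative assumption, not the claimed implementation; the names claim_allocatable_domain and locked are invented for the sketch.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mem_domain {
    atomic_bool locked;    /* false = idle (allocatable), true = locked */
    size_t nr_free;
};

/* Scan the set; atomically flip the first idle domain to locked and return it.
 * Returns NULL when no allocatable memory domain exists in the set. */
struct mem_domain *claim_allocatable_domain(struct mem_domain **set, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        bool expected = false;
        if (atomic_compare_exchange_strong(&set[i]->locked, &expected, true))
            return set[i];    /* idle -> locked: this domain now serves only the caller */
        /* CAS failed: the domain is already serving another CPU, keep scanning */
    }
    return NULL;
}
```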
In a possible embodiment, the specific implementation manner of determining, according to the memory domain set, the memory domain to be allocated corresponding to the first process may further be: the memory domains in the memory domain set can be detected; if it is detected that no allocatable memory domain exists in the memory domain set, the first process can be set to a waiting state; when it is detected that an allocatable memory domain reappears in the memory domain set, the first process can be switched from the waiting state to an allocation state, the memory domain to be allocated corresponding to the first process in the allocation state is obtained from the allocatable memory domains, and the memory domain to be allocated is switched from the idle state to the locked state; each memory block in an allocatable memory domain is in an idle state; a memory domain to be allocated that is in the locked state has no memory block in the idle state.
It should be understood that, when the first memory allocation request sent by the first process is received, if it is detected that no allocatable memory domain exists in the current memory domain set, this indicates that all currently existing memory domains are serving different CPUs and no memory domain is available to serve the first process. The service server may then set the first process to a waiting state, forcing the first process to wait. While the first process is waiting, when a certain process (CPU) completes a memory request (which can be understood as the process having been allocated its memory blocks), the memory domain that served that process is released; the released memory domain can again provide service (it is in the idle state again), so the released memory domain can be understood as an allocatable memory domain that reappears in the memory domain set. At this point the first process no longer needs to wait and may obtain its memory domain to be allocated from the released memory domain, and that memory domain to be allocated provides service for the first process. When the memory domain to be allocated provides service for the first process, its state can be switched from the idle state to the locked state.
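The waiting path can be sketched with a condition variable: a process that finds no allocatable domain blocks until some domain is released. The wait-queue shape below is an assumption about one possible realisation; the fields are assumed to be initialised elsewhere (PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, idle_domains set to the initial number of allocatable domains).

```c
#include <pthread.h>
#include <stddef.h>

struct domain_waitq {
    pthread_mutex_t mu;
    pthread_cond_t  cv;
    size_t idle_domains;   /* current number of allocatable domains in the set */
};

/* Called by a process whose request found no allocatable domain: waiting state. */
void wait_for_allocatable_domain(struct domain_waitq *wq)
{
    pthread_mutex_lock(&wq->mu);
    while (wq->idle_domains == 0)
        pthread_cond_wait(&wq->cv, &wq->mu);   /* sleep until a domain is released */
    wq->idle_domains--;                        /* switch from waiting to allocation state */
    pthread_mutex_unlock(&wq->mu);
}

/* Called when a process completes its request and releases its memory domain. */
void signal_domain_released(struct domain_waitq *wq)
{
    pthread_mutex_lock(&wq->mu);
    wq->idle_domains++;                        /* the released domain is allocatable again */
    pthread_cond_signal(&wq->cv);              /* wake one waiting process */
    pthread_mutex_unlock(&wq->mu);
}
```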
Further, it may be understood that after the memory domain to be allocated corresponding to the first process is obtained, a first target memory block corresponding to the first process may be obtained from one or more memory blocks included in the memory domain to be allocated, and the first target memory block is allocated to the first process. It is understood that, when each process sends a memory allocation request, a block request amount (which may be referred to as a memory request amount or a memory block request amount) for a memory block may be sent together, and then the service server may allocate memory for the process based on the memory block request amount. For example, in the first memory allocation request of the first process, the memory block request amount 8 may be carried, and then the service server may obtain 8 memory blocks in the memory domain to be allocated, as a first target memory block, and allocate the first target memory block to the first process.
The manner in which the service server acquires the first target memory block from the memory domain to be allocated may be random acquisition, or acquisition according to the identifiers of the memory blocks (for example, in ascending order of the sequence numbers of the memory blocks). Other manners may also be used to acquire the target memory block of a process from a memory domain, which is not limited in this application.
Step S103, according to the first target memory block, updating the memory domain to be allocated in the memory domain set into an updated memory domain; the memory domain set including the updated memory domain is configured to allocate a second target memory block for the second process when a second memory allocation request of the second process is received.
In the present application, the memory domain to be allocated in the memory domain set may be updated to an updated memory domain according to the first target memory block.
It can be understood that when the first memory allocation request sent by the first process is received, if an allocatable memory domain exists in the memory domain set, a certain memory domain can be directly obtained from the allocatable memory domains as the memory domain to be allocated of the first process. After the first target memory block is obtained from the memory domain to be allocated and allocated to the first process, the memory blocks in the memory domain to be allocated other than the first target memory block can be used as the first remaining memory blocks; then, the updated memory domain may be formed from the first remaining memory blocks, where the updated memory domain in the memory domain set belongs to the allocatable memory domains. It can be appreciated that while the memory domain to be allocated provides services for the first process, it is in the locked state, and when the first process completes the memory request (i.e., the first target memory block has been allocated to the first process), the first process can release the memory domain to be allocated. In fact, after the first target memory block in the memory domain to be allocated is allocated to the first process, the remaining memory blocks in the memory domain to be allocated constitute a new memory domain (referred to as the updated memory domain), so what the first process actually releases can be understood as the updated memory domain. The updated memory domain is in the idle state after being released, and each memory block in it is in the idle state, that is, the updated memory domain is an allocatable memory domain.
It can be understood that, when the first memory allocation request sent by the first process is received, if it is detected that no allocable memory domain exists in the current memory domain set, the service server may set the first process to a waiting state, so as to force the first process to wait. During the waiting process of the first process, when a certain history process (CPU) completes a memory request (which can be understood as after the process has been allocated with a memory block), the memory domain providing the service for the process is released, and the released memory domain can provide the corresponding service (be in an idle state again), so that the released memory domain can be understood as the re-appearing allocatable memory domain in the memory domain set. At this time, the first process may obtain the memory domain to be allocated of the first process from the released memory domain without waiting, and then, according to the first target memory block, the memory domain to be allocated may be subjected to domain division to obtain a first divided memory domain and a second divided memory domain; the first partitioned memory domain and the second partitioned memory domain may be determined as updated memory domains; wherein the updated memory domain in the memory domain set belongs to the allocatable memory domain.
The specific implementation manner of performing domain division on the memory domain to be allocated according to the first target memory block to obtain the first divided memory domain and the second divided memory domain may be: determining memory blocks to be divided in the memory domain to be allocated according to the first target memory block, wherein the number of the memory blocks to be divided is greater than the number of the first target memory blocks; then, the difference memory blocks between the memory blocks to be divided and the first target memory block can be obtained; the first divided memory domain can be formed from the difference memory blocks, and the second divided memory domain can be formed from the second remaining memory blocks, where the second remaining memory blocks refer to the memory blocks in the memory domain to be allocated other than the memory blocks to be divided.
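A minimal sketch of this division step is shown below, treating the domain as one contiguous run of page-frame numbers: the requested blocks go to the process, the difference forms the first divided domain, and the untouched tail forms the second divided domain. The array layout and the names block_list and divide_domain are simplifying assumptions.

```c
#include <stddef.h>

struct block_list {            /* a run of page-frame numbers in one array */
    unsigned long *pfn;
    size_t count;
};

struct split_result {
    struct block_list allocated;        /* the first target memory block(s)          */
    struct block_list first_divided;    /* difference blocks -> first divided domain */
    struct block_list second_divided;   /* second remaining blocks -> second domain  */
};

/* Preconditions: domain.count >= to_divide >= requested.
 * The memory blocks to be divided are the first `to_divide` entries. */
struct split_result divide_domain(struct block_list domain,
                                  size_t to_divide, size_t requested)
{
    struct split_result r;
    r.allocated.pfn        = domain.pfn;
    r.allocated.count      = requested;
    r.first_divided.pfn    = domain.pfn + requested;    /* difference memory blocks */
    r.first_divided.count  = to_divide - requested;
    r.second_divided.pfn   = domain.pfn + to_divide;    /* second remaining memory blocks */
    r.second_divided.count = domain.count - to_divide;
    return r;
}
```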
It should be understood that, when the first memory allocation request sent by the first process is received, if it is detected that no allocatable memory domain exists in the current memory domain set, it can be considered that the existing memory domains are all providing services for other memory allocation requests while a new memory allocation request from the first process has arrived, i.e., the system is in a high-concurrency memory request scenario. In order to meet the concurrent requests of the system without reducing allocation efficiency, the memory domains can be split, that is, the memory domains are subdivided, so that multiple CPUs can be served simultaneously by different memory domains, realizing concurrent processing of memory requests. When the concurrency demand in the system decreases, because the memory domains have been split repeatedly, many memory domains may no longer be used; in order to avoid resource fragmentation, the memory domains can be merged to achieve resource integration.
One specific way to merge memory domains may be: the method comprises the steps of obtaining the residual memory domains except the memory domain to be allocated in the allocatable memory domains and the domain number of the residual memory domains; when the number of the domains is larger than the number threshold, carrying out domain merging on the residual memory domains to obtain merged memory domains; the combined memory domain in the memory domain set belongs to the allocatable memory domain.
That is, after determining the memory domain to be allocated in the first process, the number of memory domains in an idle state in the current memory domain set (that is, the number of domains in the current allocable memory domain, the number of remaining memory domains except for the memory domain to be allocated) may be obtained, and when the number of domains is greater than the number threshold, the remaining memory domains may be domain-merged. Wherein the number threshold may be manually specified, for example, the number threshold may be 2, 3, 4, …, which will not be illustrated here. It should be noted that, the merging mode of the remaining memory domains may be to directly merge all the remaining memory domains, or may be to select a portion of the memory domains (e.g., two memory domains) in the remaining memory domains for merging. The merging mode of the memory domain is not limited in this application.
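The merge step under low concurrency can be sketched as splicing the free lists of the idle domains into one merged domain once their count exceeds the threshold. The threshold value, the singly linked free list, and the policy of merging all remaining domains into the first one are assumptions; as the text notes, merging only a subset of domains is equally possible.

```c
#include <stddef.h>

struct mem_block  { struct mem_block *next; unsigned long pfn; };
struct mem_domain { struct mem_block *free_list; size_t nr_free; };

#define DOMAIN_COUNT_THRESHOLD 2   /* illustrative value; the text leaves the threshold configurable */

/* If more idle domains remain than the threshold, splice domains[1..n-1]
 * into domains[0], leaving one larger merged memory domain. */
void maybe_merge_domains(struct mem_domain **domains, size_t n)
{
    if (n <= DOMAIN_COUNT_THRESHOLD)
        return;
    struct mem_domain *merged = domains[0];
    for (size_t i = 1; i < n; i++) {
        struct mem_block *head = domains[i]->free_list;
        if (head == NULL)
            continue;
        struct mem_block *tail = head;
        while (tail->next != NULL)
            tail = tail->next;
        tail->next = merged->free_list;    /* splice domain i's free list onto the merged domain */
        merged->free_list = head;
        merged->nr_free += domains[i]->nr_free;
        domains[i]->free_list = NULL;      /* domain i is now empty */
        domains[i]->nr_free = 0;
    }
}
```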
It should be understood that after the memory domain to be allocated is updated to the updated memory domain, when the second memory allocation request sent by the second process is received, the memory domain to be allocated corresponding to the second process may be determined based on the memory domain set that includes the updated memory domain, and the second target memory block is obtained from the memory domain to be allocated of the second process and allocated to the second process. For the specific implementation process, refer to the description of allocating the first target memory block to the first process, which will not be repeated here. It should be noted that the memory allocation request of the second process may also be sent while the memory domain set has not yet been updated (that is, while the memory domain set does not contain the updated memory domain), i.e., after the memory domain to be allocated corresponding to the first process has been determined in the current memory domain set but before the memory domain set is updated; in that case, when the memory allocation request of the second process is received, the memory domain to be allocated of the second process may be determined from the current memory domain set.
In this embodiment of the present application, the memory domain may be dynamically adjusted according to the concurrency requirement of the process, for example, the memory domain may be divided under the condition of high concurrency memory request, so that multiple memory domains may be provided to simultaneously respond to the memory request requirement of multiple processes, for example, when a first memory allocation request of a first process is received, the first process does not need to queue a history process (such as a process that sends the memory allocation request prior to the first process) to complete the request, but may directly determine, according to multiple memory domains in the memory domain set, a memory domain to be allocated corresponding to the first process, and allocate a first target memory block for the first process by the memory domain to be allocated. Therefore, the memory allocation time delay can be reduced, and the response efficiency of the memory request can be improved. And under the condition of low concurrent memory requests, the memory domains can be combined and integrated, and the fragmented memory domains are integrated into a large resource pool, so that the fragmentation is reduced. In summary, the method and the device can dynamically expand or merge the memory domains according to the concurrency condition of the system, can provide a plurality of memory domains to simultaneously respond to the multi-process memory allocation request under the high concurrency condition, reduce the response delay of the request and improve the memory allocation efficiency; and the memory domains can be integrated under the condition of low concurrency, so that fragmentation is reduced, and the resource utilization rate is improved.
Further, for easy understanding of the method for dynamically adjusting the memory domains, please refer to fig. 3a, which is a schematic diagram of memory domains providing services in a high concurrency scenario according to an embodiment of the present application. As shown in fig. 3a, when the system generates high-concurrency memory request demand, satisfying the concurrency of the system is the major concern, and the memory can be dynamically split into memory domain 0, memory domain 1 and memory domain 2. Memory domain 0 may be composed of physical page frame (i.e., idle memory block) 0 and physical page frame 3, memory domain 1 may be composed of physical page frame 1 and physical page frame 5, and memory domain 2 may be composed of physical page frame 2 and physical page frame 4. At this time, multiple CPUs may be served simultaneously by different memory domains; for example, as shown in fig. 3a, CPU0 may be served by memory domain 0, CPU1 by memory domain 1, and CPU2 by memory domain 2.
In this situation, aggregation is not performed among the memory domains, and cross-domain aggregation cannot be performed. It should be understood that when the system is under high-concurrency memory request demand, dynamically splitting the memory domains makes multiple memory domains available to serve different CPUs simultaneously, so that CPU requests can be processed concurrently, the memory response delay can be reduced, and the memory allocation efficiency is improved.
Similarly, in order to facilitate understanding of the method for dynamically adjusting the memory domains, please refer to fig. 3b, which is a schematic diagram of memory domains providing services in a low concurrency scenario according to an embodiment of the present application. As shown in fig. 3b, when the concurrent memory requests of the system decrease (for example, as shown in fig. 3b, among CPU0, CPU1 and CPU2 only CPU0 sends a memory allocation request, while CPU1 and CPU2 do not), reducing fragmentation becomes the main concern, and it is no longer necessary to maintain multiple memory domains; at this time, the multiple memory domains may be merged. For example, as shown in fig. 3b, physical page frames 0 and 3 from memory domain 0, physical page frames 1 and 5 from memory domain 1, and physical page frames 2 and 4 from memory domain 2 may be merged and integrated, so that a new memory domain (e.g., memory domain 3 shown in fig. 3b) may be obtained.
In this situation, aggregation can be performed between the memory domains, including cross-domain aggregation. It can be understood that, through merging, page frames distributed across multiple memory domains can be combined into higher-order pages, so that resource fragmentation can be reduced and the resource utilization rate can be improved.
For further understanding of the dynamic adjustment process of the memory domain, please refer to fig. 4a, fig. 4a is a schematic view of a scenario for splitting the memory domain according to an embodiment of the present application. As shown in fig. 4a, the process of splitting the memory domain may be divided into a concurrency detection phase, a resource partitioning phase, and a memory domain expansion phase. For ease of understanding, these three stages will be described below.
The concurrency detection phase as shown in fig. 4a may detect concurrency request problems in the system. For example, as shown in fig. 4a, CPU0 may correspond to process 0, where the current memory domain set includes only memory domain 0 (i.e., the current memory domain set is composed of one memory domain 0), and the memory domain 0 may be composed of all idle memory blocks (all idle page frames) in the current system, where the memory domain 0 is providing services to CPU0, that is, where CPU0 is requesting that memory domain 0 allocate memory blocks for it, and where memory domain 0 cannot provide services to other CPUs (that is, where memory domain 0 is not in an idle state and does not belong to an allocatable memory domain). At this time, when the CPU1 (may correspond to the process 1) issues a memory allocation request, since the unique memory domain 0 is providing services for the CPU0, the service server cannot allocate a memory block for the CPU1 due to insufficient system resources, and the service server may allocate a waiting space for the CPU1 at this time, where the CPU1 is in a waiting state, and waits for the service server to allocate a memory block for the CPU 1. It can be understood that when multiple CPUs (e.g., CPU0 and CPU 1) issue memory requests concurrently, the memory domain cannot provide services for the multiple CPUs concurrently, and the response delay of the requests is large, which seriously affects the memory allocation efficiency. In order to reduce the response time delay of the memory request and improve the memory allocation efficiency, the embodiment of the application can divide the system resources (namely the memory domains), namely the memory domains, and expand the number of the memory domains so as to provide services for a plurality of CPUs simultaneously.
As shown in fig. 4a, in the resource partition stage, after the CPU0 completes the memory request (it can be understood that after the CPU0 has been allocated with the memory block), the CPU0 releases the memory domain 0. It may be understood that, the memory allocation requests sent by each CPU may carry the number of requests for memory blocks (which may be referred to as the memory block request amount), and when the memory domain 0 provides services for the CPU0, the service server may allocate a corresponding memory block for the CPU0 based on the memory block request amount carried in the memory allocation request of the CPU0, that is, the service server may obtain the corresponding memory block from the memory domain 0 and allocate the corresponding memory block to the CPU0. When CPU0 releases memory domain 0, then, in practice, the memory blocks included in memory domain 0 do not include the memory blocks already allocated to CPU0, and for ease of distinction, the memory domain released by CPU0 may be referred to herein as memory domain 0'. Further, after the memory domain 0 'is released, the service server may detect the existence of the waiting space at this time, and the service server may perform domain division on the memory domain 0'.
The specific process of domain division of memory domain 0' may be as follows: first, the service server can obtain the memory block request amount carried in the memory allocation request of CPU1; the service server can then obtain, from memory domain 0', a number of memory blocks larger than the memory block request amount as the memory blocks to be divided, and the memory blocks to be divided can form memory domain 1, which can serve as the memory domain to be allocated of CPU1 and provide services for CPU1 (for example, the target memory block is obtained from memory domain 1 and allocated to CPU1); meanwhile, the remaining memory blocks in memory domain 0' (i.e., the memory blocks other than the memory blocks to be divided) may form a new memory domain (e.g., the memory domain 0″ shown in fig. 4a). In this way, memory domain 0″ and memory domain 1 are obtained by performing domain division on memory domain 0', and the memory domain set may include memory domain 0″ and memory domain 1.
Further, as shown in fig. 4a, in the memory domain expansion stage, after CPU1 completes its request (that is, after CPU1 has been allocated its memory blocks), CPU1 releases memory domain 1. As described above, the memory allocation request sent by each CPU may carry the memory block request amount; when memory domain 1 provides a service for CPU1, the service server may allocate corresponding memory blocks for CPU1 based on the memory block request amount carried in the memory allocation request of CPU1, that is, the service server may obtain the corresponding memory blocks from memory domain 1 and allocate them to CPU1. When CPU1 releases memory domain 1, the released domain in practice no longer includes the memory blocks already allocated to CPU1; for ease of distinction, the memory domain released by CPU1 may be referred to here as memory domain 1'. After CPU1 completes its request, the memory domains included in the memory domain set are therefore memory domain 0'' and memory domain 1'. The memory domain has thus been extended from the single memory domain 0 to a plurality of memory domains (memory domain 0'' and memory domain 1'), and the memory domain set including memory domain 0'' and memory domain 1' can directly satisfy subsequent concurrent accesses of the CPUs (e.g., concurrent accesses of CPU2 and CPU1).
It should be noted that, when memory domain 0'' and memory domain 1' are providing services for CPU0 and CPU1, respectively, if CPU2 initiates a memory allocation request at this time, then, as with CPU1 described above, CPU2 may wait for a certain CPU to complete its request and release its memory domain; the released memory domain is then divided, and a memory block is allocated to CPU2 from the divided memory domain.
It can be understood that, for the scenario of high concurrent memory request demand, the present application may dynamically divide the memory domain, so that multiple memory domains may be provided to serve multiple CPUs at the same time, thereby reducing the request response delay and improving the memory allocation efficiency.
Further, in order to facilitate understanding of the dynamic adjustment process for the memory domain, please refer to fig. 4b, fig. 4b is a schematic diagram of a scenario of merging memory domains according to an embodiment of the present application. As shown in fig. 4b, the process of merging memory domains can be divided into a redundant resource detection stage, a resource merging stage, and a memory domain compression stage. For ease of understanding, these three stages will be described below.
The redundant resource detection stage shown in fig. 4b may detect redundant resources in the system. For example, as shown in fig. 4b, CPU0 may correspond to process 0, and the current memory domain set may include memory domain 0, memory domain 1 and memory domain 2, where memory domain 0 is providing a service to CPU0 while memory domain 1 and memory domain 2 are not providing services to any CPU and are in an idle state (that is, memory domain 1 and memory domain 2 are both allocatable memory domains). When CPU0 completes its request, i.e., memory domain 0 completes its service, CPU0 releases memory domain 0. When CPU0 releases memory domain 0, the released domain in practice no longer includes the memory blocks already allocated to CPU0; for ease of distinction, the memory domain released by CPU0 may be referred to as memory domain 0'. At this time, the memory domains in the system may be detected: memory domain 0', memory domain 1 and memory domain 2 are all in an idle state, and memory domain 1 and memory domain 2 have been idle for a long time, so the concurrency condition of memory requests in the system may be considered a low-concurrency condition (i.e., the number of allocatable memory domains in the system is relatively large). At this point, resource merging may be performed.
Further, as shown in fig. 4b, in the resource merging stage, the memory domains in the idle state may be merged. For example, as shown in fig. 4b, memory domain 1 and memory domain 2 may be merged to obtain memory domain 3. It should be noted that the merging here may combine all the memory domains in the idle state, or any two idle memory domains may be selected and merged. When selecting memory domains, the memory domains that can form continuous physical pages after being merged may be preferentially selected for merging. For example, suppose the physical page frames included in memory domain 0' are physical page frame 1, physical page frame 3 and physical page frame 5, the physical page frames included in memory domain 1 are physical page frame 2 and physical page frame 4, and the physical page frames included in memory domain 2 are physical page frame 8 and physical page frame 9; it can be seen that physical page frame 1, physical page frame 3, physical page frame 5, physical page frame 2 and physical page frame 4 can constitute the continuous pages 1 to 5, so memory domain 0' and memory domain 1 may be merged preferentially.
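The contiguity preference can be expressed as a simple test. The sketch below assumes each idle domain keeps a sorted, duplicate-free list of its page-frame numbers (an illustrative representation, not prescribed by the embodiment); two domains are preferred merge partners when the union of their frames forms one unbroken run, exactly as frames {1, 3, 5} and {2, 4} form pages 1 to 5 above.

```c
#include <stddef.h>
#include <stdbool.h>

/*
 * Returns true if the union of two sorted, duplicate-free lists of page-frame
 * numbers covers a single contiguous range, i.e. merging the two domains
 * would yield continuous physical pages.
 */
static bool union_is_contiguous(const unsigned long *a, size_t na,
                                const unsigned long *b, size_t nb)
{
    if (na == 0 && nb == 0)
        return true;

    unsigned long expect;                 /* next frame number we expect to see */
    if (na == 0)      expect = b[0];
    else if (nb == 0) expect = a[0];
    else              expect = a[0] < b[0] ? a[0] : b[0];

    size_t i = 0, j = 0;
    while (i < na || j < nb) {
        unsigned long next;
        if (j == nb || (i < na && a[i] < b[j]))
            next = a[i++];
        else
            next = b[j++];
        if (next != expect)
            return false;                 /* gap: merging would leave a hole */
        expect++;
    }
    return true;
}
```

For the example above, the test accepts memory domain 0' ({1, 3, 5}) with memory domain 1 ({2, 4}) and rejects either of them with memory domain 2 ({8, 9}).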
Further, as shown in fig. 4b, in the memory domain compression stage, when the memory domain merging is completed, a newly generated memory domain (i.e., the memory domain 3) may be added to the memory domain set. That is, the memory domains included in the current memory domain set are memory domain 0' and memory domain 3.
It can be understood that, for the scenario of low concurrent memory request requirements, the present application may dynamically merge memory domains, so as to reduce fragmentation caused by memory domain boundaries and improve resource utilization.
It will be appreciated that when a process completes, its memory blocks are released, and the released memory blocks may be added to a memory domain in an idle state (i.e., a memory domain that is not providing services, namely an allocatable memory domain). In order to reduce the resource conflict rate, the present application may provide a memory buffer, set a memory block release condition for the memory buffer, first store the memory blocks released by a process into the memory buffer, and release the memory blocks in the memory buffer into a memory domain in the memory domain set when the memory block release condition is satisfied. For ease of understanding, please refer to fig. 5; fig. 5 is a flowchart of a data processing method according to an embodiment of the present application, which illustrates a specific method for releasing memory blocks through the memory buffer, taking the release of a first target memory block by a first process as an example. As shown in fig. 5, the flow may include at least the following steps S501 to S503:
In step S501, a memory release request for a first target memory block sent by a first process is received, and a storage state of a memory buffer is detected according to the memory release request.
Specifically, when a memory release request for a first target memory block sent by a first process is received, a storage state of a memory buffer area may be detected according to the memory release request. The specific implementation manner for detecting the storage state of the memory buffer area may be: historical memory release behavior information of the first process and a virtual address space can be obtained; then, according to the historical memory release behavior information and the virtual address space, determining the predicted memory release request quantity of the first process in the target time period; then, the target block number of the first target memory block and the predicted block number corresponding to the predicted memory release request amount can be obtained, and the storage state of the memory buffer area can be detected according to the target block number and the predicted block number.
One specific implementation for detecting the storage state of the memory buffer according to the number of target blocks and the number of predicted blocks may be: the number of target blocks and the number of predicted blocks may be subjected to operation processing (i.e., summed) to obtain a total number; the remaining storage capacity of the memory buffer may be obtained; if the remaining storage capacity of the memory buffer is greater than the total number, the storage state of the memory buffer may be determined as a first storage state; if the remaining storage capacity of the memory buffer is smaller than the total number, the storage state of the memory buffer may be determined as a second storage state.
It can be appreciated that when a memory release request of the first process for the first target memory block is received, the memory release behavior information generated by the first process in the past (which may be referred to as historical memory release behavior information) and the virtual address space segment corresponding to the first process may be collected; according to the collected historical memory release behavior information, the probability that the first process will generate further memory release requests can be predicted. For example, if a large number of memory blocks belonging to the same life cycle exist in the current virtual address space segment, and it is determined from the historical memory release behavior information that the first process has already performed multiple memory release actions, it can be predicted that the first process will, with a high probability, continue to generate a batch of memory release requests within a short time. That is, the predicted memory release request amount of the first process in the target time period may be predicted according to the historical memory release behavior information and the virtual address space segment, and the number of memory blocks to be released in each predicted memory release request may also be predicted, so that the predicted block number corresponding to the predicted memory release request amount (i.e., the sum of the predicted release block numbers of the individual predicted memory release requests) may be determined, and the storage state of the memory buffer may be detected according to the target block number of the first target memory block and the predicted block number.
If the total of the target block number and the predicted block number is smaller than the remaining storage capacity of the memory buffer, the memory buffer may be considered sufficient to store the first target memory block and the predicted released memory blocks corresponding to the predicted memory release requests, and the storage state of the memory buffer may be referred to as the first storage state (a state of sufficient storage capacity); if the total of the target block number and the predicted block number is greater than the remaining storage capacity of the memory buffer, the memory buffer may be considered incapable of storing the first target memory block and the predicted released memory blocks corresponding to the predicted memory release requests, and the storage state of the memory buffer may then be referred to as the second storage state (a state of insufficient storage capacity).
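The detection logic of step S501 can be summarised by the following sketch. The predictor shown is only a placeholder heuristic built from the signals mentioned above (same-life-cycle blocks in the virtual address space segment and the number of past release actions); the embodiment does not prescribe this rule, and every name and threshold here is an assumption.

```c
#include <stddef.h>

enum buf_state {
    BUF_FIRST_STATE,   /* remaining capacity is sufficient   */
    BUF_SECOND_STATE   /* remaining capacity is insufficient */
};

/* Placeholder predictor: a batch of same-life-cycle blocks plus a history of
 * repeated releases suggests the rest of the batch will follow shortly. */
static size_t predict_release_blocks(size_t same_lifecycle_blocks,
                                     size_t past_release_actions)
{
    return past_release_actions > 1 ? same_lifecycle_blocks : 0;
}

/* Step S501: compare (target blocks + predicted blocks) with what the buffer
 * can still hold; equality is treated as insufficient in this sketch. */
static enum buf_state detect_buffer_state(size_t target_blocks,
                                          size_t remaining_capacity,
                                          size_t same_lifecycle_blocks,
                                          size_t past_release_actions)
{
    size_t predicted = predict_release_blocks(same_lifecycle_blocks,
                                              past_release_actions);
    size_t total = target_blocks + predicted;   /* the "total number" above */

    return remaining_capacity > total ? BUF_FIRST_STATE : BUF_SECOND_STATE;
}
```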
Alternatively, for determining the predicted memory release request amount of the first process in the target time period, prediction manners other than the manner of using the historical memory release behavior information and the virtual address space may also be used. For example, the process may actively provide information through a system call, or the prediction may be performed based on information such as process behavior information obtained through offline sampling; the specific prediction manner is not limited in the present application.
In step S502, if the storage state of the memory buffer is the first storage state, the first target memory block is added to the memory buffer, and when the memory block release condition is satisfied, the first target memory block in the memory buffer is released to the memory domain set.
Specifically, if the total number of the target blocks and the number of the predicted blocks is smaller than the remaining storage capacity of the memory buffer, the memory buffer may be considered to be sufficient to store the first target memory block and the predicted released memory block corresponding to the predicted memory release request (the memory buffer is in the first storage state), and then the first target memory block may be directly stored into the memory buffer without adjusting the memory buffer, and when the memory block release condition is satisfied, the stored memory blocks in the memory buffer are uniformly released.
One specific implementation for releasing the first target memory block in the memory buffer into the memory domain set may be: when the memory block release time is reached, the stored memory blocks included in the memory buffer area can be acquired; the stored memory block comprises a first target memory block; the method comprises the steps that an allocatable memory domain in a memory domain set can be obtained at the memory block release moment, and the allocatable memory domain can be used as a memory domain to be added; each memory block in the allocatable memory domain is in an idle state; the stored memory blocks may then be released into the memory domain to be added.
It is understood that the memory block release time may be the end time of the target time period. For example, if the target time period is 19:00-19:30 on October 26, 2021, then 19:00 on October 26, 2021 is the start time of the target time period and 19:30 on October 26, 2021 is its end time, and the memory block release time may refer to that end time, namely 19:30 on October 26, 2021. The memory blocks actually released by the first process in the target time period (which may be referred to as actually released memory blocks) are also stored in the memory buffer together, so that the stored memory blocks in the memory buffer can be obtained at this time and uniformly released into the memory domain set.
It will be appreciated that the stored memory blocks may all be released into the same allocatable memory domain; when there are two or more allocatable memory domains, the stored memory blocks may also be released into different domains separately (i.e., each allocatable memory domain receives a portion of the stored memory blocks).
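A compact sketch of the batch release of step S502 follows, under assumed structures: once the release condition holds, the whole buffer is drained into one allocatable memory domain in a single pass rather than returning blocks one by one. Draining into a single domain is simply the easiest choice for the sketch; as noted above, the blocks could equally be spread over several allocatable domains.

```c
#include <stddef.h>

/* Per-process (or per-CPU) memory buffer holding released page frames. */
struct mem_buffer {
    unsigned long *blocks;
    size_t         nblocks;
    size_t         capacity;
};

/* An allocatable memory domain chosen as the "memory domain to be added". */
struct free_domain {
    unsigned long *frames;
    size_t         nframes;
    size_t         max_frames;
};

/* Drain every stored block into the idle domain; returns how many were moved. */
static size_t drain_buffer(struct mem_buffer *buf, struct free_domain *dom)
{
    size_t moved = 0;
    while (buf->nblocks > 0 && dom->nframes < dom->max_frames) {
        dom->frames[dom->nframes++] = buf->blocks[--buf->nblocks];
        moved++;   /* one bulk hand-over instead of many per-block releases */
    }
    return moved;
}
```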
Alternatively, it is understood that the memory block release condition may be set to a condition other than reaching the end time of the target time period; for example, the memory block release condition may be that the upper limit of the memory buffer has been reached. For instance, after the memory buffer has been adjusted according to the historical memory release behavior information and the virtual address space, the first process may actually release more memory blocks in the target time period than predicted, so that the upper limit of the memory buffer is reached (i.e., the capacity of the memory buffer is insufficient) before the end time of the target time period; in this case, the stored memory blocks in the memory buffer may be released first. Other conditions may of course also be set as the memory block release condition, which are not enumerated one by one here.
In step S503, if the storage state of the memory buffer is the second storage state, the storage capacity of the memory buffer is adjusted according to the first target memory block to obtain an adjusted memory buffer, the first target memory block is added into the adjusted memory buffer, and when the memory block release condition is satisfied, the first target memory block in the adjusted memory buffer is released into the memory domain set.
Specifically, if the total number of the target blocks and the number of the predicted blocks is greater than the remaining storage capacity of the memory buffer, the memory buffer may be considered as incapable of supporting storage of the first target memory block and the predicted released memory block (the memory buffer is in the second storage state) corresponding to the predicted memory release request, and then the boundary of the memory buffer may be adjusted (that is, the storage capacity of the memory buffer is enlarged) at this time, so as to obtain an adjusted memory buffer, so that more physical page frames may be loaded. After the adjustment, the first target memory block may be added to the memory buffer, and when the memory block release condition is satisfied, the stored memory blocks stored in the memory buffer are uniformly released.
The specific implementation manner of adjusting the memory buffer area according to the first target memory block may be: the predicted memory release request quantity of the first process in the target time period and the number of predicted blocks corresponding to the predicted memory release request quantity can be obtained; then, the target block number of the first target memory block can be obtained, and the total number corresponding to the target block number and the predicted block number is determined; then, the residual storage capacity corresponding to the memory buffer area can be obtained, and the storage capacity of the memory buffer area can be adjusted according to the residual storage capacity and the total number to obtain an adjusted memory buffer area; wherein the updated remaining storage capacity of the adjusted memory buffer is greater than the total number.
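The boundary adjustment of step S503 can be sketched as follows, with an assumed growth policy: the buffer is reallocated so that its updated remaining capacity strictly exceeds the total of the blocks released now and the predicted upcoming releases. The use of realloc and the exact growth amount are illustrative choices, not part of the embodiment.

```c
#include <stdlib.h>
#include <stddef.h>

struct mem_buffer {
    unsigned long *blocks;
    size_t         nblocks;    /* blocks currently parked in the buffer */
    size_t         capacity;   /* current upper limit of the buffer     */
};

/*
 * Grow the buffer until (capacity - nblocks) > target + predicted, i.e. the
 * updated remaining capacity exceeds the total number.
 * Returns 0 on success, -1 if the reallocation fails.
 */
static int adjust_buffer(struct mem_buffer *buf, size_t target, size_t predicted)
{
    size_t total = target + predicted;
    if (buf->capacity - buf->nblocks > total)
        return 0;                               /* already large enough */

    size_t new_cap = buf->nblocks + total + 1;  /* strictly greater than total */
    unsigned long *p = realloc(buf->blocks, new_cap * sizeof *p);
    if (p == NULL)
        return -1;
    buf->blocks   = p;
    buf->capacity = new_cap;
    return 0;
}
```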
In the embodiment of the present application, the memory blocks released by a process in batches may be aggregated by a mechanism of dynamically adjusting the buffer. Meanwhile, by utilizing the historical memory release behavior information of the process and its virtual address space, subsequent potential memory release requests can be predicted, so that the boundary of the buffer can be actively adjusted. By having the buffer aggregate and uniformly release the memory blocks, the resource conflict rate of memory requests (such as concurrent memory allocation requests and memory release requests) on the allocatable memory domains can be reduced, the waste of memory resources confined in the memory buffer is reduced, and the resource utilization rate is improved.
Further, for ease of understanding, please refer to fig. 6; fig. 6 is a system configuration diagram provided in an embodiment of the present application. As shown in fig. 6, the system structure may include an inert layer (Lazy Layer) and a core layer (Core Layer). The inert layer may include a dynamic memory buffer, and the core layer may include n memory domains (memory domain 0, memory domain 1, …, memory domain n).
It will be appreciated that when an application in the system generates a memory allocation or reclamation (also referred to as release) request, the memory request (including memory allocation requests and memory release requests) may first be passed to the inert layer. The inert layer maintains a dynamic memory buffer for each CPU, predicts the number of subsequent memory requests, and dynamically adjusts the length of the memory buffer based on the predicted number of memory requests. The inert layer may interact with the core layer when certain conditions are met (e.g., a memory allocation request is issued, or the memory reclamation of a process is completed). For example, when a CPU issues a memory allocation request, a portion of the free page frames may be obtained from the plurality of memory domains included in the core layer and sent to the inert layer, and the inert layer may allocate these free page frames to the corresponding CPU. As another example, when a process issues a memory release request, the released memory blocks may first be stored in the inert layer, and when the memory block release condition is reached, the released memory blocks are released from the inert layer into a memory domain of the core layer.
It can be understood that the core layer can maintain a global free page frame pool in the system, divide the page frames into a plurality of memory domains, and dynamically adjust the number of memory domains according to the actual concurrency requirements in the system. For example, under a high-concurrency condition, the number of memory domains is expanded and a plurality of memory domains are provided to respond to the multi-core demand simultaneously; under a low-concurrency condition, the number of memory domains is compressed and the fragmented memory domains are integrated into a single large memory domain, which enlarges the resource pool and reduces fragmentation.
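The two-layer interaction of fig. 6 can be outlined as below, with assumed interfaces on both sides: an allocation request is satisfied by taking free page frames from a core-layer memory domain and forwarding them through the inert layer to the CPU, while a release request parks frames in the inert layer's per-CPU buffer and only hands them back to the core layer once the release condition is met. The core-layer calls are stubbed; everything here is a sketch, not the embodiment's interface.

```c
#include <stddef.h>
#include <stdbool.h>

/* Stubs standing in for the core layer; a real implementation would draw
 * frames from / return frames to one of the memory domains of fig. 6. */
static size_t core_take_frames(unsigned long *out, size_t want)
{
    for (size_t i = 0; i < want; i++)
        out[i] = i;                       /* placeholder frame numbers */
    return want;
}
static void core_return_frames(const unsigned long *in, size_t n)
{
    (void)in; (void)n;                    /* frames rejoin an idle domain */
}

/* The inert layer's dynamic buffer, one per CPU. */
struct lazy_cpu_buf {
    unsigned long frames[256];
    size_t        nframes;
};

/* Allocation path: core layer -> inert layer -> requesting CPU. */
static size_t lazy_alloc(unsigned long *out, size_t want)
{
    return core_take_frames(out, want);
}

/* Release path: park the frame; hand the batch back only when the
 * memory block release condition is met. */
static void lazy_release(struct lazy_cpu_buf *buf, unsigned long frame,
                         bool release_condition_met)
{
    if (buf->nframes < sizeof buf->frames / sizeof buf->frames[0])
        buf->frames[buf->nframes++] = frame;
    if (release_condition_met) {
        core_return_frames(buf->frames, buf->nframes);
        buf->nframes = 0;
    }
}
```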
In this embodiment of the present application, the memory domains may be dynamically adjusted according to the concurrency requirements of processes. For example, the memory domains may be divided under a high-concurrency memory request condition, so that multiple memory domains may be provided to respond to the memory request demands of multiple processes at the same time; for instance, when a first memory allocation request of a first process is received, the first process does not need to wait in a queue for a historical process (such as a process that sent a memory allocation request prior to the first process) to complete its request, but a memory domain to be allocated corresponding to the first process may be determined directly from the multiple memory domains in the memory domain set, and a first target memory block may be allocated to the first process from the memory domain to be allocated. Therefore, the memory allocation delay can be reduced, and the response efficiency of memory requests can be improved. Under a low-concurrency memory request condition, the memory domains can be merged and integrated, and the fragmented memory domains are consolidated into a large resource pool, so that fragmentation is reduced. In summary, the method and the device can dynamically expand or merge the memory domains according to the concurrency condition of the system: under the high-concurrency condition, a plurality of memory domains can be provided to respond to multi-process memory allocation requests simultaneously, reducing the response delay of requests and improving the memory allocation efficiency; under the low-concurrency condition, the memory domains can be integrated, reducing fragmentation and improving the resource utilization rate. Meanwhile, the memory blocks released by processes in batches can be aggregated through the mechanism of dynamically adjusting the buffer. By utilizing the historical memory release behavior information of a process and its virtual address space, subsequent potential memory release requests can be predicted, so that the boundary of the buffer can be actively adjusted; by having the buffer aggregate and uniformly release the memory blocks, the resource conflict rate of memory requests (such as concurrent memory allocation requests and memory release requests) on the allocatable memory domains can be reduced, the waste of memory resources confined in the memory buffer is reduced, and the resource utilization rate is further improved.
Further, referring to fig. 7, fig. 7 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The data processing apparatus may be a computer program (including program code) running in a computer device, for example the data processing apparatus is an application software; the data processing device may be used to perform the method shown in fig. 3. As shown in fig. 7, the data processing apparatus 1 may include: a request processing module 11, a memory domain determining module 12, a memory block allocation module 13 and a memory domain updating module 14.
The request processing module 11 is configured to receive a first memory allocation request sent by a first process, and obtain a memory domain set according to the first memory allocation request; each memory domain in the memory domain set consists of one or more memory blocks;
the memory domain determining module 12 is configured to determine, according to the memory domain set, a memory domain to be allocated corresponding to the first process;
the memory block allocation module 13 is configured to determine, according to the first memory allocation request, a first target memory block corresponding to the first process from one or more memory blocks included in the memory domain to be allocated, and allocate the first target memory block to the first process;
the memory domain updating module 14 is configured to update a memory domain to be allocated in the memory domain set to an updated memory domain according to the first target memory block; the memory domain set including the updated memory domain is configured to allocate a second target memory block for the second process when a second memory allocation request of the second process is received.
The specific implementation manners of the request processing module 11, the memory domain determining module 12, the memory block allocation module 13, and the memory domain updating module 14 may be referred to the description of step S101 to step S103 in the embodiment corresponding to fig. 3, and will not be described herein.
In one embodiment, the memory domain determination module 12 may include: the first domain detecting unit 121 and the first domain acquiring unit 122.
A first domain detecting unit 121, configured to detect a memory domain in the memory domain set;
the first domain obtaining unit 122 is configured to obtain a to-be-allocated memory domain corresponding to the first process in the allocatable memory domain if it is detected that the memory domain set has the allocatable memory domain, and switch the to-be-allocated memory domain from the idle state to the locked state; each memory block in the allocatable memory domain is in an idle state; there is no memory block in the idle state in the memory domain to be allocated in the locked state.
The specific implementation manner of the first domain detecting unit 121 and the first domain obtaining unit 122 may refer to the description of step S102 in the embodiment corresponding to fig. 3, which will not be described herein.
In one embodiment, the memory domain determination module 12 may include: the second domain detecting unit 123 and the second domain acquiring unit 124.
A second domain detecting unit 123, configured to detect a memory domain in the memory domain set;
the second domain obtaining unit 124 is configured to set the first process to a waiting state if it is detected that no allocatable memory domain exists in the memory domain set, switch the first process from the waiting state to an allocation state when it is detected that the allocatable memory domain reappears in the memory domain set, obtain a memory domain to be allocated corresponding to the first process in the allocation state from the allocatable memory domain, and switch the memory domain to be allocated from the idle state to a locking state; each memory block in the allocatable memory domain is in an idle state; there is no memory block in the idle state in the memory domain to be allocated in the locked state.
The specific implementation manner of the second domain detecting unit 123 and the second domain obtaining unit 124 may refer to the description of step S102 in the embodiment corresponding to fig. 3, which will not be described herein.
In one embodiment, the memory domain update module 14 may include: the block determination unit 141 and the domain composition unit 142.
A block determining unit 141, configured to use the memory blocks in the memory domain to be allocated other than the first target memory block as the first remaining memory blocks;
A domain composition unit 142, configured to compose an updated memory domain from the first remaining memory blocks; the updated memory domain in the memory domain set belongs to the allocatable memory domain.
For a specific implementation manner of the block determining unit 141 and the domain composition unit 142, reference may be made to the description of step S103 in the embodiment corresponding to fig. 3, and the description will not be repeated here.
In one embodiment, the memory domain update module 14 may include: the domain dividing unit 143 and the update domain determining unit 144.
The domain dividing unit 143 is configured to perform domain division on the memory domain to be allocated according to the first target memory block, so as to obtain a first divided memory domain and a second divided memory domain;
an update domain determining unit 144, configured to determine the first divided memory domain and the second divided memory domain as update memory domains; the updated memory domain in the memory domain set belongs to the allocatable memory domain.
For a specific implementation manner of the domain dividing unit 143 and the update domain determining unit 144, reference may be made to the description of step S103 in the embodiment corresponding to fig. 3, and the description will not be repeated here.
In one embodiment, the domain dividing unit 143 may include: a block to be divided determination sub-unit 1431, a difference block acquisition sub-unit 1432, and a division domain composition sub-unit 1433.
A block to be divided determination subunit 1431, configured to determine, according to the first target memory block, the memory blocks to be divided in the memory domain to be allocated; the number of the memory blocks to be divided is larger than that of the first target memory blocks;
a difference block obtaining subunit 1432, configured to obtain a difference memory block between the memory block to be divided and the first target memory block;
a divided domain composing subunit 1433, configured to compose a first divided memory domain according to the difference memory block, and compose a second divided memory domain according to the second remaining memory block; the second remaining memory blocks refer to memory blocks in the memory domain to be allocated other than the memory blocks to be partitioned.
The specific implementation manner of the block to be divided determination subunit 1431, the difference block acquisition subunit 1432, and the division domain composition subunit 1433 may be referred to the description of step S103 in the embodiment corresponding to fig. 3, and will not be described herein.
In one embodiment, the memory domain determination module 12 may further include: the domain number determining unit 125 and the domain merging unit 126.
A domain number determining unit 125, configured to obtain the remaining memory domains of the allocatable memory domains except the memory domain to be allocated, and the domain number of the remaining memory domains;
A domain merging unit 126, configured to, when the number of domains is greater than the number threshold, perform domain merging on the remaining memory domains to obtain a merged memory domain; the combined memory domain in the memory domain set belongs to the allocatable memory domain.
The specific implementation manner of the domain number determining unit 125 and the domain merging unit 126 may refer to the description of step S102 in the embodiment corresponding to fig. 3, which will not be described herein.
In one embodiment, the data processing apparatus 1 may further include: a state detection module 15, a first memory block release module 16 and a second memory block release module 17.
The state detection module 15 is configured to receive a memory release request for a first target memory block sent by a first process;
the state detection module 15 is further configured to detect a storage state of the memory buffer according to the memory release request;
a first memory block release module 16, configured to add the first target memory block to the memory buffer if the storage state of the memory buffer is the first storage state;
the first memory block release module 16 is further configured to release the first target memory block in the memory buffer into the memory domain set when the memory block release condition is satisfied;
A second memory block release module 17, configured to adjust the storage capacity of the memory buffer according to the first target memory block if the storage state of the memory buffer is the second storage state, so as to obtain an adjusted memory buffer;
the second memory block release module 17 is further configured to add the first target memory block to the adjusted memory buffer, and release the first target memory block in the adjusted memory buffer to the memory domain set when the memory block release condition is satisfied.
The specific implementation manners of the state detection module 15, the first memory block release module 16, and the second memory block release module 17 may be referred to the description of step S501 to step S503 in the embodiment corresponding to fig. 5, and will not be described herein.
In one embodiment, the status detection module 15 may include: a process information acquisition unit 151, a prediction amount determination unit 152, and a state detection unit 153.
A process information acquiring unit 151, configured to acquire historical memory release behavior information of a first process and a virtual address space;
a predicted amount determining unit 152, configured to determine a predicted memory release request amount of the first process in the target time period according to the historical memory release behavior information and the virtual address space;
The state detecting unit 153 is configured to obtain a target block number of the first target memory block and a predicted block number corresponding to the predicted memory release request amount, and detect a storage state of the memory buffer according to the target block number and the predicted block number.
The specific implementation manner of the process information obtaining unit 151, the predicted amount determining unit 152, and the state detecting unit 153 may be referred to the description of step S501 in the embodiment corresponding to fig. 5, and will not be described herein.
In one embodiment, the state detection unit 153 may include: an operation subunit 1531, a capacity acquisition subunit 1532, and a state determination subunit 1533.
An operation subunit 1531, configured to perform operation processing on the number of target blocks and the number of predicted blocks to obtain a total number;
a capacity acquisition subunit 1532, configured to acquire the remaining storage capacity of the memory buffer;
a state determining subunit 1533, configured to determine the storage state of the memory buffer as the first storage state if the remaining storage capacity of the memory buffer is greater than the total number;
the state determining subunit 1533 is further configured to determine the storage state of the memory buffer as the second storage state if the remaining storage capacity of the memory buffer is less than the total number.
For specific implementation manners of the operation subunit 1531, the capacity acquisition subunit 1532, and the state determination subunit 1533, reference may be made to the description of step S501 in the embodiment corresponding to fig. 5, which will not be repeated here.
In one embodiment, the first memory block release module 16 may include: a stored block acquisition unit 161, a predetermined unit 162, and a block addition unit 163.
A stored block obtaining unit 161, configured to obtain a stored memory block included in the memory buffer when a memory block release time is reached; the stored memory blocks include a first target memory block;
a pre-determining unit 162, configured to obtain an allocatable memory domain in the memory domain set at a memory block release time, and use the allocatable memory domain as a memory domain to be added; each memory block in the allocatable memory domain is in an idle state;
the block adding unit 163 is configured to release the stored memory block into the memory domain to be added.
The specific implementation manner of the stored block obtaining unit 161, the predetermined unit 162 and the block adding unit 163 may be referred to the description of step S502 in the embodiment corresponding to fig. 5, and will not be described herein.
In one embodiment, the second memory block release module 17 may include: the number determination unit 171 and the capacity adjustment unit 172.
A number determining unit 171, configured to obtain a predicted memory release request amount of the first process in the target period, and a predicted block number corresponding to the predicted memory release request amount;
the number determining unit 171 is further configured to obtain the target block number of the first target memory block, and determine the total number corresponding to the target block number and the predicted block number;
the capacity adjustment unit 172 is configured to obtain a remaining storage capacity corresponding to the memory buffer, and adjust the storage capacity of the memory buffer according to the remaining storage capacity and the total number, so as to obtain an adjusted memory buffer; the updated remaining storage capacity of the adjusted memory buffer is greater than the total number.
For specific implementation manners of the number determining unit 171 and the capacity adjusting unit 172, reference may be made to the description of step S503 in the embodiment corresponding to fig. 5, and the description will not be repeated here.
In this embodiment of the present application, the memory domains may be dynamically adjusted according to the concurrency requirements of processes. For example, the memory domains may be divided under a high-concurrency memory request condition, so that multiple memory domains may be provided to respond to the memory request demands of multiple processes at the same time; for instance, when a first memory allocation request of a first process is received, the first process does not need to wait in a queue for a historical process (such as a process that sent a memory allocation request prior to the first process) to complete its request, but a memory domain to be allocated corresponding to the first process may be determined directly from the multiple memory domains in the memory domain set, and a first target memory block may be allocated to the first process from the memory domain to be allocated. Therefore, the memory allocation delay can be reduced, and the response efficiency of memory requests can be improved. Under a low-concurrency memory request condition, the memory domains can be merged and integrated, and the fragmented memory domains are consolidated into a large resource pool, so that fragmentation is reduced. In summary, the method and the device can dynamically expand or merge the memory domains according to the concurrency condition of the system: under the high-concurrency condition, a plurality of memory domains can be provided to respond to multi-process memory allocation requests simultaneously, reducing the response delay of requests and improving the memory allocation efficiency; under the low-concurrency condition, the memory domains can be integrated, reducing fragmentation and improving the resource utilization rate. Meanwhile, the memory blocks released by processes in batches can be aggregated through the mechanism of dynamically adjusting the buffer. By utilizing the historical memory release behavior information of a process and its virtual address space, subsequent potential memory release requests can be predicted, so that the boundary of the buffer can be actively adjusted; by having the buffer aggregate and uniformly release the memory blocks, the resource conflict rate of memory requests (such as concurrent memory allocation requests and memory release requests) on the allocatable memory domains can be reduced, the waste of memory resources confined in the memory buffer is reduced, and the resource utilization rate is further improved.
Further, referring to fig. 8, fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 8, the data processing apparatus 1 in the embodiment corresponding to fig. 7 may be applied to the computer device 8000, and the computer device 8000 may include: processor 8001, network interface 8004, and memory 8005, and further, the above-described computer device 8000 further includes: a user interface 8003, and at least one communication bus 8002. Wherein a communication bus 8002 is used to enable connected communications between these components. The user interface 8003 may include a Display screen (Display), a Keyboard (Keyboard), and the optional user interface 8003 may also include standard wired, wireless interfaces, among others. Network interface 8004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). Memory 8005 may be a high speed RAM memory or a non-volatile memory, such as at least one disk memory. Memory 8005 may optionally also be at least one memory device located remotely from the aforementioned processor 8001. As shown in fig. 8, an operating system, a network communication module, a user interface module, and a device control application program may be included in the memory 8005, which is one type of computer-readable storage medium.
In the computer device 8000 shown in fig. 8, the network interface 8004 may provide a network communication function; while user interface 8003 is primarily an interface for providing input to the user; and the processor 8001 may be used to invoke a device control application stored in the memory 8005 to implement:
receiving a first memory allocation request sent by a first process, and acquiring a memory domain set according to the first memory allocation request; each memory domain in the memory domain set consists of one or more memory blocks;
determining a memory domain to be allocated corresponding to a first process according to a memory domain set, determining a first target memory block corresponding to the first process in one or more memory blocks included in the memory domain to be allocated according to a first memory allocation request, and allocating the first target memory block to the first process;
updating the memory domain to be allocated in the memory domain set into an updated memory domain according to the first target memory block; the memory domain set including the updated memory domain is configured to allocate a second target memory block for the second process when a second memory allocation request of the second process is received.
It should be understood that the computer device 8000 described in the embodiment of the present application may perform the description of the data processing method in the embodiment corresponding to fig. 2 to 6, and may also perform the description of the data processing apparatus 1 in the embodiment corresponding to fig. 7, which is not repeated herein. In addition, the description of the beneficial effects of the same method is omitted.
Furthermore, it should be noted here that: the embodiments of the present application further provide a computer readable storage medium, where the computer program executed by the above-mentioned computer device 8000 for data processing is stored, and the computer program includes program instructions; when the program instructions are executed by the processor, the description of the data processing method in the embodiments corresponding to fig. 2 to 6 can be performed, and therefore, the description will not be repeated here. In addition, the description of the beneficial effects of the same method is omitted. For technical details not disclosed in the embodiments of the computer-readable storage medium of the present application, please refer to the description of the method embodiments of the present application.
The computer readable storage medium may be the data processing apparatus provided in any one of the foregoing embodiments or an internal storage unit of the computer device, for example, a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card (flash card) or the like, which are provided on the computer device. Further, the computer-readable storage medium may also include both internal storage units and external storage devices of the computer device. The computer-readable storage medium is used to store the computer program and other programs and data required by the computer device. The computer-readable storage medium may also be used to temporarily store data that has been output or is to be output.
In one aspect of the present application, a computer program product or computer program is provided that includes computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the method provided in an aspect of the embodiments of the present application.
The terms "first", "second" and the like in the description, claims and drawings of the embodiments of the present application are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the term "include" and any variations thereof are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus, article, or device that comprises a list of steps or elements is not limited to the listed steps or modules, but may alternatively include other steps or modules not listed or inherent to such process, method, apparatus, article, or device.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied in electronic hardware, in computer software, or in a combination of the two, and that the elements and steps of the examples have been generally described in terms of function in the foregoing description to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The methods and related devices provided in the embodiments of the present application are described with reference to the method flowcharts and/or structure diagrams provided in the embodiments of the present application, and each flowchart and/or block of the method flowcharts and/or structure diagrams may be implemented by computer program instructions, and combinations of flowcharts and/or blocks in the flowchart and/or block diagrams. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or structural diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or structures.
The foregoing disclosure is only illustrative of the preferred embodiments of the present application and is not intended to limit the scope of the claims herein, as the equivalent of the claims herein shall be construed to fall within the scope of the claims herein.

Claims (16)

1. A method of data processing, comprising:
receiving a first memory allocation request sent by a first process, and acquiring a memory domain set according to the first memory allocation request; each memory domain in the memory domain set consists of one or more memory blocks;
determining a memory domain to be allocated corresponding to the first process according to the memory domain set, determining a first target memory block corresponding to the first process in one or more memory blocks included in the memory domain to be allocated according to the first memory allocation request, and allocating the first target memory block to the first process;
updating the memory domain to be allocated in the memory domain set into an updated memory domain according to the first target memory block; and the memory domain set containing the updated memory domain is used for allocating a second target memory block to a second process when a second memory allocation request of the second process is received.
2. The method of claim 1, wherein the determining, according to the set of memory domains, a memory domain to be allocated corresponding to the first process includes:
detecting the memory domain in the memory domain set;
if the memory domain set is detected to have the memory domain capable of being allocated, acquiring a memory domain to be allocated corresponding to the first process in the memory domain capable of being allocated, and switching the memory domain to be allocated from an idle state to a locking state; each memory block in the allocatable memory domain is in an idle state; there is no memory block in the idle state in the memory domain to be allocated in the locked state.
3. The method of claim 1, wherein the determining, according to the set of memory domains, a memory domain to be allocated corresponding to the first process includes:
detecting the memory domain in the memory domain set;
if no allocable memory domain exists in the memory domain set, setting the first process to be in a waiting state, switching the first process to be in an allocation state from the waiting state when the allocable memory domain reappears in the memory domain set, acquiring the memory domain to be allocated corresponding to the first process in the allocation state from the allocable memory domain, and switching the memory domain to be allocated from the idle state to a locking state; each memory block in the allocatable memory domain is in an idle state; there is no memory block in the idle state in the memory domain to be allocated in the locked state.
4. The method of claim 2, wherein updating the memory domain to be allocated in the memory domain set to an updated memory domain according to the first target memory block comprises:
taking the memory blocks in the memory domain to be allocated other than the first target memory block as first residual memory blocks;
forming an updated memory domain according to the first residual memory block; the updated memory domain in the memory domain set belongs to the allocatable memory domain.
5. The method of claim 3, wherein updating the memory domain to be allocated in the memory domain set to an updated memory domain according to the first target memory block comprises:
performing domain division on the memory domain to be allocated according to the first target memory block to obtain a first divided memory domain and a second divided memory domain;
determining the first divided memory domain and the second divided memory domain as updated memory domains; the updated memory domain in the memory domain set belongs to the allocatable memory domain.
6. The method of claim 5, wherein the performing domain division on the memory domain to be allocated according to the first target memory block to obtain a first divided memory domain and a second divided memory domain includes:
Determining memory blocks to be divided in the memory domain to be allocated according to the first target memory block; the number of the memory blocks to be divided is larger than that of the first target memory blocks;
acquiring a difference memory block between the memory block to be divided and the first target memory block;
forming the first divided memory domain according to the difference memory blocks, and forming a second divided memory domain according to second remaining memory blocks; the second remaining memory blocks refer to the memory blocks in the memory domain to be allocated other than the memory blocks to be divided.
7. The method according to claim 2, wherein the method further comprises:
acquiring the residual memory domains except the memory domain to be allocated in the allocatable memory domain and the domain number of the residual memory domains;
when the number of the domains is larger than a number threshold, carrying out domain merging on the residual memory domains to obtain merged memory domains; the merged memory domain in the memory domain set belongs to the allocatable memory domain.
8. The method according to claim 1, wherein the method further comprises:
receiving a memory release request for the first target memory block sent by the first process, and detecting the storage state of a memory buffer area according to the memory release request;
If the storage state of the memory buffer is a first storage state, adding the first target memory block into the memory buffer, and releasing the first target memory block in the memory buffer into the memory domain set when a memory block release condition is met;
and if the storage state of the memory buffer is a second storage state, adjusting the storage capacity of the memory buffer according to the first target memory block to obtain an adjusted memory buffer, adding the first target memory block into the adjusted memory buffer, and releasing the first target memory block in the adjusted memory buffer into the memory domain set when the memory block release condition is met.
9. The method of claim 8, wherein detecting the memory state of the memory buffer based on the memory release request comprises:
acquiring historical memory release behavior information of the first process and a virtual address space;
determining a predicted memory release request amount of the first process in a target time period according to the historical memory release behavior information and the virtual address space;
And acquiring the target block number of the first target memory block and the predicted block number corresponding to the predicted memory release request amount, and detecting the storage state of the memory buffer according to the target block number and the predicted block number.
10. The method of claim 9, wherein detecting the memory state of the memory buffer based on the target block number and the predicted block number comprises:
performing an arithmetic operation on the target block number and the predicted block number to obtain a total number;
acquiring a remaining storage capacity of the memory buffer;
if the remaining storage capacity of the memory buffer is greater than the total number, determining the storage state of the memory buffer as the first storage state;
and if the remaining storage capacity of the memory buffer is less than the total number, determining the storage state of the memory buffer as the second storage state.
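Claims 9 and 10 together amount to a capacity check against the current release plus a predicted one; the sketch below makes that concrete. The prediction formula (scaling the historical average release size by the mapped fraction of the virtual address space) is purely a stand-in invented for the example, as are the function names.

```python
# Minimal sketch (assumed names) of the storage-state check of claims 9-10.
FIRST_STATE, SECOND_STATE = "first storage state", "second storage state"

def detect_storage_state(remaining_capacity, target_block_number, predicted_block_number):
    total_number = target_block_number + predicted_block_number   # the arithmetic step
    return FIRST_STATE if remaining_capacity > total_number else SECOND_STATE

# Stand-in prediction for claim 9: average historical release size, scaled by
# how much of the process's virtual address space is still mapped.
def predict_block_number(historical_release_sizes, mapped_fraction):
    average = sum(historical_release_sizes) / len(historical_release_sizes)
    return int(average * mapped_fraction)

predicted = predict_block_number([4, 6, 8], mapped_fraction=0.5)       # -> 3
print(detect_storage_state(remaining_capacity=16,
                           target_block_number=2,
                           predicted_block_number=predicted))           # -> first storage state
```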
11. The method of claim 8, wherein the releasing the first target memory block in the memory buffer into the memory domain set when the memory block release condition is met comprises:
when a memory block release moment is reached, acquiring stored memory blocks included in the memory buffer; the stored memory blocks include the first target memory block;
acquiring an allocatable memory domain in the memory domain set at the memory block release moment, and taking the allocatable memory domain as a memory domain to be added; each memory block in the allocatable memory domain is in an idle state;
and releasing the stored memory blocks into the memory domain to be added.
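One way to read claim 11 is sketched below: at the release moment the buffered blocks are moved into a domain whose blocks are all idle. The free-block bookkeeping via a set and the function name flush_at_release_moment are assumptions for illustration.

```python
# Minimal sketch (assumed names) of the flush in claim 11: pick an allocatable
# domain (every block idle) as the memory domain to be added, then release the
# stored memory blocks into it.
def flush_at_release_moment(buffer_blocks, domain_set, idle_blocks):
    domain_to_add = next(d for d in domain_set if all(b in idle_blocks for b in d))
    domain_to_add.extend(buffer_blocks)
    buffer_blocks.clear()
    return domain_to_add

domains = [[0, 1], [2, 3]]
print(flush_at_release_moment([8, 9], domains, idle_blocks={2, 3}))   # -> [2, 3, 8, 9]
```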
12. The method of claim 8, wherein the adjusting the storage capacity of the memory buffer according to the first target memory block to obtain an adjusted memory buffer comprises:
acquiring a predicted memory release request amount of the first process in a target time period and a predicted block number corresponding to the predicted memory release request amount;
acquiring a target block number of the first target memory block, and determining a total number of the target block number and the predicted block number;
and acquiring a remaining storage capacity of the memory buffer, and adjusting the storage capacity of the memory buffer according to the remaining storage capacity and the total number to obtain the adjusted memory buffer; an updated remaining storage capacity of the adjusted memory buffer is greater than the total number.
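The capacity adjustment of claim 12 only has to leave the adjusted buffer with more free room than the current-plus-predicted total; a minimal sketch of one such rule follows, with the "+ 1" margin chosen arbitrarily for the example.

```python
# Minimal sketch (assumed names) of the capacity adjustment in claim 12: grow
# the buffer so its remaining storage capacity exceeds the total number of
# blocks being released now plus those predicted for the target time period.
def adjust_capacity(capacity, used, target_block_number, predicted_block_number):
    total_number = target_block_number + predicted_block_number
    if capacity - used <= total_number:
        capacity = used + total_number + 1   # remaining capacity becomes total + 1
    return capacity

print(adjust_capacity(capacity=8, used=7,
                      target_block_number=2, predicted_block_number=3))   # -> 13
```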
13. A data processing apparatus, comprising:
the request processing module is used for receiving a first memory allocation request sent by a first process and acquiring a memory domain set according to the first memory allocation request; each memory domain in the memory domain set consists of one or more memory blocks;
the memory domain determining module is used for determining a memory domain to be allocated corresponding to the first process according to the memory domain set;
the memory block allocation module is used for determining a first target memory block corresponding to the first process from one or more memory blocks included in the memory domain to be allocated according to the first memory allocation request, and allocating the first target memory block to the first process;
the memory domain updating module is used for updating the memory domain to be allocated in the memory domain set into an updated memory domain according to the first target memory block; and the memory domain set containing the updated memory domain is used for allocating a second target memory block to a second process when a second memory allocation request of the second process is received.
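To show how the four modules of the apparatus in claim 13 fit together, here is a toy class whose methods mirror the modules; the class name, the first-fit domain choice and the in-place block removal are all illustrative assumptions, and the process_id parameter is carried through only to mirror the first process in the claim.

```python
# Minimal sketch (assumed names) of the apparatus of claim 13: each method
# stands for one module, and handle_request echoes the allocation flow.
class DataProcessingApparatus:
    def __init__(self, domain_set):
        self.domain_set = domain_set                           # domains = lists of block ids

    def handle_request(self, process_id, block_count):         # request processing module
        domain = self.pick_domain()                             # memory domain determining module
        blocks = self.allocate_blocks(domain, block_count)      # memory block allocation module
        self.update_domain(domain, blocks)                      # memory domain updating module
        return blocks

    def pick_domain(self):
        # Illustrative policy: the first domain that still has free blocks.
        return next(d for d in self.domain_set if d)

    def allocate_blocks(self, domain, count):
        return domain[:count]

    def update_domain(self, domain, allocated):
        for block in allocated:
            domain.remove(block)

apparatus = DataProcessingApparatus([[0, 1, 2, 3], [4, 5]])
print(apparatus.handle_request(process_id=1, block_count=2))   # -> [0, 1]
```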
14. A computer device, comprising: a processor, a memory, and a network interface;
the processor is connected to the memory; the network interface is used for providing network communication functions; the memory is used for storing program code; and the processor is used for invoking the program code to cause the computer device to perform the method of any one of claims 1-12.
15. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program adapted to be loaded by a processor and to perform the method of any of claims 1-12.
16. A computer program product or computer program, characterized in that it comprises computer instructions stored in a computer-readable storage medium, which are adapted to be read and executed by a processor to cause a computer device with the processor to perform the method of any of claims 1-12.
CN202111293160.6A 2021-11-03 2021-11-03 Data processing method, device, equipment and readable storage medium Pending CN116069493A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111293160.6A CN116069493A (en) 2021-11-03 2021-11-03 Data processing method, device, equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111293160.6A CN116069493A (en) 2021-11-03 2021-11-03 Data processing method, device, equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116069493A true CN116069493A (en) 2023-05-05

Family

ID=86168687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111293160.6A Pending CN116069493A (en) 2021-11-03 2021-11-03 Data processing method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116069493A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116450055A (en) * 2023-06-15 2023-07-18 支付宝(杭州)信息技术有限公司 Method and system for distributing storage area between multi-processing cards
CN116450055B (en) * 2023-06-15 2023-10-27 支付宝(杭州)信息技术有限公司 Method and system for distributing storage area between multi-processing cards
CN116541180A (en) * 2023-07-07 2023-08-04 荣耀终端有限公司 Memory allocation method, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US20200328984A1 (en) Method and apparatus for allocating resource
US10120705B2 (en) Method for implementing GPU virtualization and related apparatus, and system
US20180027061A1 (en) Method and apparatus for elastically scaling virtual machine cluster
CN109379448B (en) File distributed deployment method and device, electronic equipment and storage medium
US20190196875A1 (en) Method, system and computer program product for processing computing task
CN113467958B (en) Data processing method, device, equipment and readable storage medium
WO2020177564A1 (en) Vnf life cycle management method and apparatus
CN110196843B (en) File distribution method based on container cluster and container cluster
WO2024066828A1 (en) Data processing method and apparatus, and device, computer-readable storage medium and computer program product
CN112346871A (en) Request processing method and micro-service system
CN106302640A (en) Data request processing method and device
CN116069493A (en) Data processing method, device, equipment and readable storage medium
CN112600761A (en) Resource allocation method, device and storage medium
CN112988346B (en) Task processing method, device, equipment and storage medium
CN114625533A (en) Distributed task scheduling method and device, electronic equipment and storage medium
CN111510493B (en) Distributed data transmission method and device
CN111078516A (en) Distributed performance test method and device and electronic equipment
CN114116220B (en) GPU sharing control method, GPU sharing control device and storage medium
CN117632457A (en) Method and related device for scheduling accelerator
CN116244231A (en) Data transmission method, device and system, electronic equipment and storage medium
US9483317B1 (en) Using multiple central processing unit cores for packet forwarding in virtualized networks
CN116841720A (en) Resource allocation method, apparatus, computer device, storage medium and program product
CN113703906A (en) Data processing method, device and system
CN114090249A (en) Resource allocation method, device, electronic equipment and storage medium
CN113271229B (en) Equipment control method and device, storage equipment, safety equipment, switch, router and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination