
CN113918483B - Multi-master device cache control method and system - Google Patents


Info

Publication number
CN113918483B
Authority
CN
China
Prior art keywords
cache
group
master
access
master device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111518586.7A
Other languages
Chinese (zh)
Other versions
CN113918483A (en)
Inventor
巩少辉
张力航
刘雄飞
叶巧玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Semidrive Technology Co Ltd
Original Assignee
Nanjing Semidrive Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Semidrive Technology Co Ltd
Priority to CN202111518586.7A
Publication of CN113918483A
Application granted
Publication of CN113918483B
Active legal-status Current
Anticipated expiration legal-status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0893Caches characterised by their organisation or structure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A multi-master cache control method comprises the following steps: 1) grouping accesses from master devices and determining the master device group to which each access belongs; 2) dividing a cache space into a plurality of cache units and allocating the cache units to master device groups; 3) receiving a read access from any master device and searching for the required data in the cache space; 4) returning the data to the master device. The invention also provides a multi-master cache control system. When multiple masters access an off-chip nonvolatile memory, the method and system reduce the number of memory reads and writes, prolong the service life of the memory, and improve the data access efficiency of the on-chip masters.

Description

Multi-master device cache control method and system
Technical Field
The invention relates to the technical field of memory access control, in particular to a multi-master cache control method and a multi-master cache control system.
Background
In board-level systems based on MCU/MPU, using an off-chip nonvolatile memory to store programs or data is a widely used solution, and the read and write speed of the off-chip memory has a direct impact on system performance. As the integration level and complexity of chips increase, multi-core processors and other master devices are often integrated in the same chip, and boot code and software complexity increase further, placing new demands on the storage capacity, read/write speed, and other properties of the nonvolatile memory.
In the prior art, the capacity and read/write speed of off-chip nonvolatile memories keep increasing. The higher read/write speed is achieved mainly through higher clock frequencies, double-edge sampling (DDR), wider data buses (quad-wire flash, octal-wire flash, HyperBus), and protocol improvements that reduce overhead. To match these memory upgrades, on-chip controllers also support multiple frequencies and operating modes; in addition, adding an on-chip cache reduces the number of direct accesses to the off-chip memory and thereby improves the effective read/write speed.
Existing on-chip controllers can operate in different modes, depending on configuration, to support off-chip nonvolatile memories of various types and speeds. Some schemes also add an on-chip cache and allocate cache pages based on the memory type to reduce off-chip accesses. However, when multiple masters share the off-chip memory, alternating accesses from different masters defeat a single caching scheme. Because the read/write speed of nonvolatile memories is difficult to improve rapidly in the short term, and existing on-chip cache schemes cannot adapt to complex multi-master scenarios, access performance in complex applications remains limited.
Disclosure of Invention
To overcome the defects of the prior art, the invention aims to provide a multi-master cache control method and system that allocate cache space when multiple on-chip masters access an off-chip nonvolatile memory, so as to meet the program and data access requirements of different masters, reduce the access frequency of the off-chip memory, prolong the service life of the memory, and improve the access efficiency of the system bus.
To achieve the above object, the multi-master cache control method provided by the present invention comprises the following steps:
1) grouping accesses from master devices and determining the master device group to which each access belongs;
2) dividing a cache space into a plurality of cache units and allocating the cache units to master device groups;
3) receiving a read access from any master device and searching for the required data in the cache space;
4) returning the data to the master device.
Further, step 1) further comprises:
grouping AXI bus accesses from the master devices according to the master device ID, the transfer ID, and the group mask and preset match value of each group, and determining the master device group to which each access belongs.
Further, step 2) further includes dividing the cache space into N equal parts, each part serving as a cache unit, and allocating the cache units to different master device groups, where N is an integer greater than or equal to 1; the one or more cache units allocated to a master device group form a cache group.
Further, the number of exclusive cache units of each master device group is configured statically; the remaining unallocated cache units, called dynamic cache units, are allocated to different cache groups dynamically.
Further, the step of dynamically allocating the cache units to different cache groups further comprises:
pre-configuring a priority level for each master device group;
maintaining an activity value according to the access frequency of each master device group;
having each dynamic cache unit record the master device group to which it currently belongs;
and when the accessed data is not present and the following condition is met, allocating the dynamic cache unit to the master device group issuing the current access:
trans_priority + trans_active >= buffer_priority + buffer_active + reassign_margin,
wherein trans_priority is the priority of the master device group initiating the current access, trans_active is the activity value of the master device group initiating the current access, buffer_priority is the priority of the master device group to which the dynamic cache unit currently belongs, buffer_active is the activity value of the master device group to which the dynamic cache unit currently belongs, and reassign_margin is the preset cache reallocation margin.
Further, the activity value is calculated as follows:
the activity value of a master device group is increased by a configurable amount depending on whether each access hits;
when an access hits, the activity value is increased by a configurable amount of 0-255;
when an access misses and no new dynamic cache unit is allocated, the activity value is increased by a configurable amount of 0-255;
when an access misses and a new dynamic cache unit is allocated, the activity value is cleared.
To achieve the above object, the present invention further provides a multi-master cache control system comprising a plurality of masters, a nonvolatile memory read/write controller, a cache control unit, a static random access memory, and a nonvolatile memory, wherein:
the plurality of masters send access requests to the nonvolatile memory read/write controller over an AXI bus;
the nonvolatile memory read/write controller groups the access requests of the plurality of masters;
the cache control unit divides the static random access memory, serving as the cache space, into a plurality of cache units and allocates the cache units to different master device groups;
the static random access memory provides the cache space for data and, on instruction from the cache control unit, is divided into a plurality of cache units;
and the nonvolatile memory reads and writes data on instruction from the nonvolatile memory read/write controller.
In order to achieve the above object, the present invention further provides a control chip, which includes the above multi-master cache control system.
In order to achieve the above object, the present invention further provides an electronic device, which includes the above control chip.
To achieve the above object, the present invention further provides a computer-readable storage medium on which a computer program is stored, the computer program, when run, executing the steps of the multi-master cache control method described above.
The multi-master cache control method has the following beneficial effects:
aiming at the condition that a plurality of main devices of a complex SoC access an off-chip nonvolatile memory, grouping the main devices, distributing limited on-chip cache resources to different main device groups in a static mode and a dynamic mode, and adaptively adjusting according to access frequency and characteristics so as to optimize access efficiency and reduce the read-write times of the nonvolatile memory; under the condition that a plurality of main devices access the off-chip nonvolatile memory, the read-write times of the memory can be reduced, the service life of the memory is prolonged, and the data access efficiency of the on-chip main devices is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a multi-master cache control method according to the present invention;
FIG. 2 is a diagram illustrating a dynamic allocation of cache units according to the present invention;
fig. 3 is a block diagram of a multi-master cache control system according to the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
In the embodiments of the invention, when multiple master devices of an SoC access an off-chip nonvolatile memory, the master devices are grouped, limited on-chip cache resources are allocated to the different master device groups both statically and dynamically, and the allocation is adaptively adjusted according to access frequency and characteristics to reduce the number of reads and writes to the nonvolatile memory.
Embodiment 1
Fig. 1 is a flowchart of a multi-master cache control method according to the present invention, and the multi-master cache control method of the present invention will be described in detail with reference to fig. 1.
First, in step 101, AXI bus accesses from a master are grouped.
In this embodiment, in a multi-master scenario, AXI bus accesses from the masters are first grouped. The device group (Master Group) to which an access belongs is determined from two identifiers, the master device ID (MID) and the transfer ID (XID), combined with the group mask (Mask) and preset match value (Match) of each group, according to the following rule:
{MID, XID} & Mask == Match(M)
When this condition is satisfied, the access is considered to belong to the M-th group (group M). When several device groups satisfy the condition, the group with the lowest index takes effect. Combining MID and XID into one identifier accommodates both static (MID-based) and dynamic (XID-based) grouping scenarios, and the Mask + Match matching scheme provides the flexibility to place multiple master devices into the same master device group.
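A minimal C sketch of this matching rule is shown below, assuming 8-bit MID and XID fields packed into a 16-bit identifier; the field widths, group count, and function name are illustrative assumptions rather than details from the patent, while the comparison and the lowest-index-wins behavior follow the rule above.

```c
#include <stdint.h>

#define NUM_GROUPS 4

typedef struct {
    uint16_t mask;   /* group Mask  */
    uint16_t match;  /* group Match */
} group_rule_t;

/* Returns the matching group index, or -1 if no group matches.
 * When several groups match, the lowest-numbered group wins,
 * mirroring the "lowest index takes effect" rule. */
static int classify_access(uint8_t mid, uint8_t xid,
                           const group_rule_t rules[NUM_GROUPS])
{
    uint16_t id = ((uint16_t)mid << 8) | xid;   /* {MID, XID} */
    for (int m = 0; m < NUM_GROUPS; m++) {
        if ((id & rules[m].mask) == rules[m].match)
            return m;
    }
    return -1;
}
```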
In step 102, the buffer space is divided into a plurality of buffer units and allocated to different master device groups.
In this embodiment, the on-chip cache space is divided into N equal parts (N is an integer greater than or equal to 1), each part being referred to as a buffer unit. To meet the access requirements of different master devices, the cache units are allocated to the different master device groups in various ways. The one or more buffer units allocated to one master device group are referred to as a buffer group.
In step 103, a read access is received from any master device.
In this embodiment, the nonvolatile memory read/write controller receives a read access from any master device and searches for the required data in the whole cache space through the cache controller.
In step 104, it is determined whether the corresponding address is found in the cache space.
In this embodiment, if the corresponding address is found in the cache space (a hit), the flow proceeds to step 106; otherwise, it proceeds to the next step.
In step 105, a cache unit is selected from the corresponding cache group, data is read from the external memory to replace the current data, and the data is returned to the master device.
In this embodiment, if the requested data is not present in the cache space (a miss), a cache unit is selected in the corresponding cache group using the LRU replacement algorithm, the data is read (loaded) from the external memory to replace the unit's current contents, and the data is returned to the master device.
At step 106, the data is returned to the master device.
In this embodiment, a read access from any master device searches for the required data in the whole cache space, and if the corresponding address is found (a hit), the data is returned to the master device. In this case there is no need to distinguish between master device groups and no need to read the off-chip memory.
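The hit/miss flow of steps 103-106 can be modeled with a small C sketch, assuming a software approximation of the hardware: each buffer unit caches one aligned block and carries an LRU timestamp. The block size, structure layout, and the hypothetical nvm_read_block() backend are assumptions for illustration only.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define BLOCK_SIZE  256u
#define NUM_UNITS   8

typedef struct {
    bool     valid;
    int      group;                 /* master group the unit belongs to */
    uint32_t tag;                   /* block-aligned base address       */
    uint32_t lru_tick;              /* smaller = least recently used    */
    uint8_t  data[BLOCK_SIZE];
} buffer_unit_t;

/* Hypothetical backend read from the off-chip nonvolatile memory. */
void nvm_read_block(uint32_t base, uint8_t *dst, uint32_t len);

static uint32_t g_tick;

/* Look up addr in all units; on a miss, reload an LRU-selected unit of
 * the caller's cache group. Returns a pointer to the cached block. */
const uint8_t *cache_read(buffer_unit_t units[NUM_UNITS],
                          int group, uint32_t addr)
{
    uint32_t tag = addr & ~(BLOCK_SIZE - 1);

    /* Hit path: any unit may serve the read, regardless of group. */
    for (int i = 0; i < NUM_UNITS; i++) {
        if (units[i].valid && units[i].tag == tag) {
            units[i].lru_tick = ++g_tick;
            return units[i].data;
        }
    }

    /* Miss path: pick the LRU unit inside the caller's cache group. */
    int victim = -1;
    for (int i = 0; i < NUM_UNITS; i++) {
        if (units[i].group != group)
            continue;
        if (victim < 0 || units[i].lru_tick < units[victim].lru_tick)
            victim = i;
    }
    if (victim < 0)
        return NULL;                /* group owns no cache unit */

    nvm_read_block(tag, units[victim].data, BLOCK_SIZE);
    units[victim].valid    = true;
    units[victim].tag      = tag;
    units[victim].lru_tick = ++g_tick;
    return units[victim].data;
}
```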
In this embodiment, the number of cache units available to each master device group is allocated both statically and dynamically. The number of exclusive cache units (private buffers) of each master device group is configured statically, and the remaining unassigned cache units are allocated to different cache groups dynamically. As shown in Fig. 2, the rules for allocating the dynamic cache units are as follows:
1) A priority level (0-255) is pre-configured for each master group.
2) An activity value (active level) is maintained according to how frequently each master group accesses the memory, calculated as follows:
a) whether an access hits or misses (loads) determines how the master group's activity value is increased, with configurable increments;
b) each hit increases the activity value by a configurable amount of 0-255;
c) a miss that is not allocated a new dynamic cache unit increases the activity value by a configurable amount of 0-255;
d) a miss that is allocated a new dynamic cache unit clears the activity value.
3) Each dynamic cache unit records the master device group to which it currently belongs. Whether the unit is reassigned to another master device group is decided according to the current priority level and activity value of that group and the attributes of the next access.
4) When an access misses and the following condition is met, the dynamic cache unit is allocated to the master device group issuing the current access:
trans_priority + trans_active >= buffer_priority + buffer_active + reassign_margin,
where trans_priority and trans_active are the priority and activity value of the master device group initiating the current access, buffer_priority and buffer_active are the priority and activity value of the master device group to which the dynamic cache unit currently belongs, and reassign_margin is a preset buffer reallocation margin. When a dynamic cache unit is idle, i.e., it does not belong to any master device group, buffer_priority, buffer_active, and reassign_margin are all treated as 0.
Under this allocation rule, while the basic requirement of each master device group (its private buffers) is still met, part of the cache space is dynamically allocated according to access frequency and characteristics, optimizing access efficiency and reducing the number of reads and writes to the off-chip nonvolatile memory.
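A brief C sketch of the reassignment decision and the activity-value updates is given below; the structure layout and the configurable increment fields are assumptions, and only the comparison itself reproduces the condition trans_priority + trans_active >= buffer_priority + buffer_active + reassign_margin, with the right-hand terms treated as 0 for an idle unit.

```c
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint32_t priority;      /* pre-configured, 0-255                        */
    uint32_t active;        /* activity value, maintained at runtime        */
    uint32_t hit_incr;      /* configurable 0-255 increment per hit         */
    uint32_t miss_incr;     /* configurable 0-255 increment per miss w/o reassignment */
} master_group_t;

typedef struct {
    bool            idle;             /* not owned by any master group */
    master_group_t *owner;            /* current owning master group   */
    uint32_t        reassign_margin;  /* preset reallocation margin    */
} dyn_buffer_t;

/* True when the unit should be handed to the requesting group. For an
 * idle unit, buffer_priority/buffer_active/reassign_margin are all 0. */
static bool should_reassign(const dyn_buffer_t *buf,
                            const master_group_t *req)
{
    uint32_t buf_prio   = buf->idle ? 0 : buf->owner->priority;
    uint32_t buf_active = buf->idle ? 0 : buf->owner->active;
    uint32_t margin     = buf->idle ? 0 : buf->reassign_margin;

    return req->priority + req->active >= buf_prio + buf_active + margin;
}

/* Activity-value update applied to the requesting group after each access. */
static void update_activity(master_group_t *grp, bool hit, bool reassigned)
{
    if (hit)
        grp->active += grp->hit_incr;       /* hit                          */
    else if (!reassigned)
        grp->active += grp->miss_incr;      /* miss, no new unit allocated  */
    else
        grp->active = 0;                    /* miss that won a new unit     */
}
```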
Embodiment 2
In an embodiment of the present invention, a multi-master cache control system is further provided. Fig. 3 is a structural diagram of a multi-master cache control system according to the present invention, and as shown in fig. 3, the multi-master cache control system of the present invention includes a plurality of masters 10, a nonvolatile memory read/write controller 20, a cache control unit 30, a static random access memory 40, and a nonvolatile memory 50, wherein,
a plurality of masters 10 that transmit access requests to the nonvolatile memory read/write controller 20 through the AXI bus;
the nonvolatile memory read/write controller 20 groups access requests from a plurality of masters 10.
In the embodiment of the present invention, the nonvolatile memory read/write controller 20 determines a device Group (Master Group) to which the access request belongs according to two identifiers, i.e., a Master device id (mid) and a transfer id (xid), in combination with a Group Mask (Mask) and a preset matching value (Match) of each Group.
The cache control unit 30 divides the SRAM 40, which serves as the cache space, into a plurality of cache units (buffer units) and allocates the cache units to different master groups.
In the embodiment of the present invention, the cache control unit 30 may allocate cache units to different master groups (master groups) in various ways. One or more buffer units allocated to one master device group are referred to as one buffer group (buffer group).
The SRAM 40 provides the cache space for data and, on instruction from the cache control unit 30, is divided into a plurality of buffer units.
And a nonvolatile memory 50 that reads and writes data in response to an instruction from the nonvolatile memory read/write controller 20.
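For orientation, the blocks of Fig. 3 might be modeled roughly as the following C structure; every name, width, and array size here is an illustrative assumption, since the patent describes hardware modules rather than a software layout.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_GROUPS 4
#define NUM_UNITS  8

/* Grouping rule applied per master group by the read/write controller 20. */
typedef struct { uint16_t mask, match; } group_rule_t;

/* One buffer unit carved out of the SRAM 40 by the cache control unit 30. */
typedef struct { bool valid; int group; uint32_t tag; } unit_t;

typedef struct {
    group_rule_t rules[NUM_GROUPS];  /* nonvolatile memory read/write controller 20: request grouping */
    unit_t       units[NUM_UNITS];   /* cache control unit 30: buffer units in the SRAM 40             */
    uint32_t     nvm_base;           /* off-chip nonvolatile memory 50 behind the controller           */
} multi_master_cache_ctrl_t;
```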
Embodiment 3
In an embodiment of the present invention, a control chip is further provided, comprising the multi-master cache control system of embodiment 2. When the plurality of masters access the off-chip nonvolatile memory, the multi-master cache control system groups the masters, allocates limited on-chip cache resources to the different master groups both statically and dynamically, and adaptively adjusts the allocation according to access frequency and characteristics to optimize access efficiency and reduce the number of reads and writes to the nonvolatile memory.
Embodiment 4
In an embodiment of the present invention, an electronic device is further provided, which includes the above-mentioned control chip, and the control chip executes the steps of the above-mentioned multi-master cache control method when running.
Embodiment 5
In an embodiment of the present invention, a computer-readable storage medium is further provided, on which a computer program is stored, and the computer program executes the steps of the multi-master cache control method in embodiment 1.
Those of ordinary skill in the art will understand that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A multi-master cache control method comprises the following steps:
1) grouping accesses from master devices and determining the master device group to which each access belongs;
2) dividing a cache space into a plurality of cache units and allocating the cache units to master device groups;
3) receiving a read access from any master device and searching for the required data in the cache space;
4) returning the data to the master device;
the step 1) further comprises the step of judging the equipment group to which one access belongs according to two identifiers of the main equipment ID and the transmission ID, and by combining the group mask and the preset matching value of each group, wherein the specific rule is as follows:
{MID,XID} & Mask == Match(M),
where MID is the master device ID, XID is the transfer ID, Mask is the group mask, Match(M) is the preset match value of the M-th group, and M is an integer greater than or equal to 1;
when the above condition is satisfied, the access is considered to belong to the M-th group;
when several device groups satisfy the condition, the group with the lowest index takes effect;
the number of exclusive cache units of each master device group is configured statically; the remaining unallocated cache units, called dynamic cache units, are allocated to different master device groups dynamically;
the step of dynamically allocating the cache units to different master device groups further comprises:
pre-configuring priority levels for each master device group;
maintaining an activity value according to the access frequency of each master device group;
having each dynamic cache unit record the master device group to which it currently belongs;
and when the accessed data is not present and the following condition is met, allocating the dynamic cache unit to the master device group issuing the current access:
trans_priority + trans_active >= buffer_priority + buffer_active + reassign_margin,
wherein trans_priority is the priority of the master device group initiating the current access, trans_active is the activity value of the master device group initiating the current access, buffer_priority is the priority of the master device group to which the dynamic cache unit currently belongs, buffer_active is the activity value of the master device group to which the dynamic cache unit currently belongs, and reassign_margin is the preset cache reallocation margin.
2. The multi-master cache control method of claim 1,
wherein step 2) further comprises dividing the cache space into N equal parts, each part serving as a cache unit, and allocating the cache units to different master device groups, where N is an integer greater than or equal to 1; the one or more cache units allocated to a master device group form a cache group.
3. The multi-master cache control method of claim 1,
wherein the activity value is calculated as follows:
the activity value of a master device group is increased by a configurable amount depending on whether each access hits;
when an access hits, the activity value is increased by a configurable amount of 0-255;
when an access misses and no new dynamic cache unit is allocated, the activity value is increased by a configurable amount of 0-255;
when an access misses and a new dynamic cache unit is allocated, the activity value is cleared.
4. A multi-master cache control system employing the multi-master cache control method of claim 1 or 2, comprising,
a plurality of masters, a nonvolatile memory read/write controller, a cache control unit, a static random access memory, and a nonvolatile memory, wherein,
the plurality of masters sending access requests to the non-volatile memory read write controller over an AXI bus;
the nonvolatile memory read/write controller groups the access requests of the plurality of masters;
the cache control unit divides the static random access memory, serving as the cache space, into a plurality of cache units and allocates the cache units to different master device groups;
the static random access memory provides the cache space for data and, on instruction from the cache control unit, is divided into a plurality of cache units;
and the nonvolatile memory reads and writes data on instruction from the nonvolatile memory read/write controller.
5. A control chip, characterized in that
the control chip comprises the multi-master cache control system of claim 4.
6. An electronic device, characterized in that
the electronic device comprises the control chip of claim 5.
7. A computer-readable storage medium having stored thereon a computer program, characterized in that,
the computer program, when run, performs the steps of the multi-master cache control method of claim 1 or 2.
CN202111518586.7A 2021-12-14 2021-12-14 Multi-master device cache control method and system Active CN113918483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111518586.7A CN113918483B (en) 2021-12-14 2021-12-14 Multi-master device cache control method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111518586.7A CN113918483B (en) 2021-12-14 2021-12-14 Multi-master device cache control method and system

Publications (2)

Publication Number Publication Date
CN113918483A (en) 2022-01-11
CN113918483B (en) 2022-03-01

Family

ID=79248776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111518586.7A Active CN113918483B (en) 2021-12-14 2021-12-14 Multi-master device cache control method and system

Country Status (1)

Country Link
CN (1) CN113918483B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009008A (en) * 2016-10-28 2018-05-08 北京市商汤科技开发有限公司 Data processing method and system, electronic equipment
CN109359063A (en) * 2018-10-15 2019-02-19 郑州云海信息技术有限公司 Caching replacement method, storage equipment and storage medium towards storage system software
CN109426623A (en) * 2017-08-29 2019-03-05 深圳市中兴微电子技术有限公司 A kind of method and device reading data
CN110275841A (en) * 2019-06-20 2019-09-24 上海燧原智能科技有限公司 Access request processing method, device, computer equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100340084C (en) * 2004-04-28 2007-09-26 联想(北京)有限公司 A method for implementing equipment group and intercommunication between grouped equipments
JP2010282405A (en) * 2009-06-04 2010-12-16 Renesas Electronics Corp Data processing system
US20170017576A1 (en) * 2015-07-16 2017-01-19 Qualcomm Incorporated Self-adaptive Cache Architecture Based on Run-time Hardware Counters and Offline Profiling of Applications
CN109074036B (en) * 2016-04-21 2021-12-31 昕诺飞控股有限公司 System and method for cloud-based monitoring and control of a physical environment
CN106604207B (en) * 2016-11-22 2020-03-17 北京交通大学 Packet-based cell access and selection method in M2M communication
CN109144898B (en) * 2017-06-19 2023-02-17 深圳市中兴微电子技术有限公司 System memory management device and system memory management method
CN110048927B (en) * 2018-01-16 2020-12-15 华为技术有限公司 Communication method and communication device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009008A (en) * 2016-10-28 2018-05-08 北京市商汤科技开发有限公司 Data processing method and system, electronic equipment
CN109426623A (en) * 2017-08-29 2019-03-05 深圳市中兴微电子技术有限公司 A kind of method and device reading data
CN109359063A (en) * 2018-10-15 2019-02-19 郑州云海信息技术有限公司 Caching replacement method, storage equipment and storage medium towards storage system software
CN110275841A (en) * 2019-06-20 2019-09-24 上海燧原智能科技有限公司 Access request processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113918483A (en) 2022-01-11

Similar Documents

Publication Publication Date Title
CN113424160B (en) Processing method, processing device and related equipment
JP3962368B2 (en) System and method for dynamically allocating shared resources
CN110134514B (en) Extensible memory object storage system based on heterogeneous memory
CN105740164B (en) Multi-core processor supporting cache consistency, reading and writing method, device and equipment
US5537635A (en) Method and system for assignment of reclaim vectors in a partitioned cache with a virtual minimum partition size
US8615634B2 (en) Coordinated writeback of dirty cachelines
US8677071B2 (en) Control of processor cache memory occupancy
CN104090847B (en) Address distribution method of solid-state storage device
US8683128B2 (en) Memory bus write prioritization
US9298621B2 (en) Managing chip multi-processors through virtual domains
US20030079087A1 (en) Cache memory control unit and method
CN104809076A (en) Management method and device of cache
US8296522B2 (en) Method, apparatus, and system for shared cache usage to different partitions in a socket with sub-socket partitioning
CN114860329B (en) Dynamic consistency bias configuration engine and method
US10990562B2 (en) System and method of asymmetric system description for optimized scheduling
CN102063386B (en) Cache management method of single-carrier multi-target cache system
US11360891B2 (en) Adaptive cache reconfiguration via clustering
CN104346404B (en) A kind of method, equipment and system for accessing data
CN106294192B (en) Memory allocation method, memory allocation device and server
CN113010453A (en) Memory management method, system, equipment and readable storage medium
CN113918483B (en) Multi-master device cache control method and system
CN113138851B (en) Data management method, related device and system
US20040193806A1 (en) Semiconductor device
US20230161714A1 (en) Method and system for direct memory access
CN111124297A (en) Performance improving method for stacked DRAM cache

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant