
CN102799538A - Cache replacement algorithm based on packet least recently used (LRU) algorithm

Cache replacement algorithm based on packet least recently used (LRU) algorithm

Info

Publication number
CN102799538A
CN102799538A CN2012102749311A CN201210274931A
Authority
CN
China
Prior art keywords
group
state
cache
priority
global
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012102749311A
Other languages
Chinese (zh)
Inventor
衣晓飞
李永进
邓让钰
晏小波
周宏伟
张英
窦强
曾坤
谢伦国
马卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN2012102749311A priority Critical patent/CN102799538A/en
Publication of CN102799538A publication Critical patent/CN102799538A/en
Pending legal-status Critical Current

Landscapes

  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a cache replacement algorithm based on a grouped least recently used (LRU) algorithm, comprising the following steps: (1) a global rotating-priority register global_state is provided, serving as the mark of the rotating priority of each group in the cache; if a bit is 1, the group corresponding to that bit has the highest priority; a local rotating-priority register group_state is provided for the ways of each group in the cache, serving as the mark of the rotating priority of each way within its group; (2) at replacement time, each group in the cache is checked for a replacement candidate, and among the groups that have one, the group with the highest priority is selected; within that group, each way is checked, and among the candidate ways the one with the highest priority is selected as the object to be evicted. The advantages of the algorithm are a relatively high cache hit rate at a relatively low hardware-implementation cost.

Description

A cache replacement algorithm based on grouped LRU
Technical field
The present invention relates generally to the field of microprocessor design, and in particular to a cache replacement algorithm based on grouped LRU for use in microprocessor design.
Background technology
In modern microprocessor design, the memory system commonly employs a cache to reduce memory-access latency. In cache design, the replacement policy affects the hit rate. The main policies used in the prior art are random replacement, first-in first-out (FIFO), least frequently used (LFU), and least recently used (LRU). In seeking the replacement victim that is least likely to be used again, FIFO is not much better than random replacement. The LRU policy performs well in most application environments because it attends to the temporal locality of each access. The LFU policy, which retains frequently used data in the cache, can also perform well in some situations.
Because true LRU is costly to implement, practical implementations often adopt some pseudo-LRU algorithm rather than true LRU. As shown in Figure 2, in a typical cache replacement algorithm, replacement occurs in the step that selects the victim. The traditional algorithm assigns an LRU weight to each cache line; in general, this weight is the time elapsed since the line was last accessed. At replacement time, the LRU weight of each way is computed, and the line with the largest weight is evicted from the cache as the victim.
Summary of the invention
The technical problem to be solved by the present invention is: in view of the technical problems of the prior art, the present invention provides a grouped-LRU cache replacement algorithm that keeps the cache hit rate relatively high while keeping the hardware-implementation cost relatively low.
To solve the above technical problem, the present invention adopts the following technical scheme:
A cache replacement algorithm based on grouped LRU, the steps of which are:
(1) A global rotating-priority register global_state is provided. global_state serves as the mark of the rotating priority of each group in the cache; if a bit is 1, the group corresponding to that bit has the highest priority. After every replacement, global_state is rotated left by one bit.
Local rotating-priority registers group_state are provided: for the ways of each group in the cache, a local rotating-priority register group_state is provided. group_state serves as the mark of the rotating priority of each way within the group; if a bit is 1, the way corresponding to that bit has the highest priority.
(2) When a replacement is performed, it is first checked whether each group in the cache has a replacement candidate; among the groups that do, the group with the highest priority is selected according to the priorities above. Within the highest-priority group, each way is checked, and among the candidate ways the one with the highest priority is selected as the object to be evicted.
Compared with the prior art, the present invention has the following advantage: in the grouped-LRU cache replacement algorithm of the present invention, no matter how many groups the cache is divided into or how many ways each group contains, only one local register per group and one global register need to be provided. Compared with other pseudo-LRU implementations, the present invention uses fewer status registers, and its hardware logic has a small delay, so it can be used in high-frequency designs; at the same time, the present invention keeps the cache hit rate relatively high.
Description of drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of the mapping between a set-associative cache and memory blocks.
Fig. 3 is a flow chart of the LRU algorithm for a local cache access.
Fig. 4 is a flow chart of the LRU algorithm for a snoop access in the present invention.
Fig. 5 is a schematic diagram of a 16-way set-associative cache in the concrete application example.
Fig. 6 is a schematic diagram of the initial state of the local and global rotating-priority registers in the concrete application example.
Fig. 7 is a schematic diagram of the cache state during one run of the replacement algorithm in the concrete application example.
Fig. 8 is a schematic diagram of the cache state during another run of the replacement algorithm in the concrete application example.
Embodiment
The present invention is explained in further detail below with reference to the accompanying drawings and a specific embodiment.
As shown in Figure 2, in a set-associative cache structure, each data block in memory can be mapped to any way of one cache set. The number of ways within a set is called the associativity. The cache in the figure is two-way set-associative. Block numbers 0, 4, 8, 12, ... map to the two ways of set 0; block numbers 1, 5, 9, 13, ... map to the two ways of set 1; block numbers 2, 6, 10, 14, ... map to the two ways of set 2; likewise, block numbers 3, 7, 11, 15, ... map to the two ways of set 3. What the present invention decides is which data block among the two ways is evicted when a new data block enters the cache structure. It will be appreciated that if the cache is 16-way rather than two-way set-associative, the present invention decides which of the 16 ways' data blocks is evicted.
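The block-to-set mapping described above can be sketched as a one-line computation (a minimal illustration assuming the 4-set cache of Figure 2; `set_index` is a hypothetical helper name, not part of the patent):

```python
def set_index(block_number: int, num_sets: int = 4) -> int:
    """Map a memory block to the cache set whose ways it may occupy."""
    return block_number % num_sets

# Blocks 0, 4, 8, 12, ... all compete for the ways of set 0;
# blocks 3, 7, 11, 15, ... for the ways of set 3.
```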
As shown in Figure 3, a local cache access checks, according to the address accessed, whether that address hits in the cache structure. On a hit, the LRU state is updated: if the instruction is a load or store, the LRU bit of the cache line is set to 1; if it is a no-operation instruction (no-op), the LRU bit of the line is cleared; the data is then returned to the requester. After the LRU state has been updated, the LRU bits of all ways are checked; if the LRU bit of every way is 1, the LRU bits are all cleared. If the local access misses in the cache, a message is sent to the next-level cache or main memory to obtain the data; when the data returns, it is filled into the cache with its LRU bit set to 1, and the data is then returned to the requester.
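The hit-path bookkeeping above can be modeled roughly as follows (a sketch, not the patent's hardware: `ways` is a hypothetical list of per-line records for one set, and only the Used/LRU-bit updates are shown):

```python
def local_access(ways, tag, is_load_or_store=True):
    """Update the per-line Used/LRU bits for one local access to a set."""
    for line in ways:
        if line["tag"] != tag:
            continue
        # Hit: a load/store sets the bit, a no-op clears it.
        line["used"] = is_load_or_store
        # If every way is now marked used, clear all bits and
        # re-mark only the line just accessed.
        if all(l["used"] for l in ways):
            for l in ways:
                l["used"] = False
            line["used"] = True
        return True
    # Miss: the caller fetches from the next level, fills a line,
    # and sets its Used bit to 1.
    return False
```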
As shown in Figure 4, a snoop request comes from the directory controller below. When a snoop request arrives at the cache, it is checked whether the request hits in the cache; on a miss, a snoop reply is returned with no data. On a hit, it is checked whether the snoop operation causes the cache line to be invalidated; if so, the LRU bit of the line is cleared. Finally, a snoop reply is returned, and whether data is carried is decided according to the snoop type.
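The snoop path can be sketched the same way (same hypothetical per-line records as the previous sketch; the data-return decision is reduced to a boolean):

```python
def snoop(ways, tag, invalidates: bool) -> bool:
    """Handle a snoop from the directory controller; returns hit/miss."""
    for line in ways:
        if line["tag"] == tag:
            if invalidates:
                line["used"] = False   # an invalidated line becomes a cheap victim
            return True                # snoop reply; data depends on the snoop type
    return False                       # miss: snoop reply without data
```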
As shown in Figure 1, the grouped-LRU cache replacement algorithm of the present invention is as follows:
(1) A global rotating-priority register global_state is provided. global_state serves as the mark of the rotating priority of each group in the cache; if a bit is 1, the group corresponding to that bit has the highest priority. After every replacement, global_state is rotated left by one bit.
Local rotating-priority registers group_state are provided: for the ways of each group in the cache, a local rotating-priority register group_state is provided. group_state serves as the mark of the rotating priority of each way within the group; if a bit is 1, the way corresponding to that bit has the highest priority.
(2) When a replacement is performed, it is first checked whether each group in the cache has a replacement candidate; among the groups that do, the group with the highest priority is selected according to the priorities above. Within the highest-priority group, each way is checked, and among the candidate ways the one with the highest priority is selected as the object to be evicted.
The present invention suits caches of any associativity. The grouped-LRU cache replacement algorithm of the present invention is described below taking a 16-way set-associative cache as an example; of course, the method of the present invention is not limited to 16-way set-associative caches but applies equally to caches of other associativities such as 4-way, 8-way, and 32-way. The grouped-LRU replacement algorithm must select one of the 16 ways to evict. The concrete steps of the grouped-LRU cache replacement algorithm of the present invention are:
1. A Used bit is provided for each cache line. The meaning of the Used bit is "recently used": each time data is filled into the cache, the Used bit is set to 1; each time data is evicted, the Used bit is cleared to 0; and each time the cache line is accessed, the bit is set to 1. In addition, if all 16 Used bits in a set are 1 — meaning that every way has been used — the Used bits of all ways are cleared together, and Used bits are then regenerated from the subsequent access trace. Figure 5 shows the 16-way set-associative cache of the concrete application example. The 16 ways are divided into four groups: ways 0, 1, 2, and 3 form a group, called group 0; ways 4, 5, 6, and 7 form group 1; ways 8, 9, 10, and 11 form group 2; and ways 12, 13, 14, and 15 form group 3.
2. The 16 ways of the cache structure are divided into four groups of four ways each. A global rotating-priority register global_state[3:0] is provided. global_state serves as the mark of the rotating priority of the four groups; it is a 4-bit one-hot signal, and a 1 in a given bit means that the corresponding group has the highest priority. After every replacement, global_state[3:0] is rotated left by one bit, so that the highest priority at the next replacement belongs to another group. At initialization, the value of the global_state register is 4'b0001, indicating that group 0 has the highest priority.
3. A local rotating-priority register group_state is provided for the four ways of each group; the four registers are group0_state[3:0], group1_state[3:0], group2_state[3:0], and group3_state[3:0]. group_state serves as the mark of the rotating priority of the four ways within a group; it is a 4-bit one-hot signal, and a 1 in a given bit means that the corresponding way has the highest priority. After every replacement, group_state[3:0] is rotated left by one bit, so that the highest priority at the next replacement belongs to another way. At initialization, the value of each group_state register is 4'b0001, indicating that way 0 of the group has the highest priority.
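The two register kinds and their post-replacement update can be sketched in a few lines (the 4-bit width matches this example; `rotate_left` is a hypothetical helper name):

```python
WIDTH = 4  # four groups, and four ways per group, in this example

def rotate_left(state: int, width: int = WIDTH) -> int:
    """One-bit circular left shift of a one-hot priority register."""
    mask = (1 << width) - 1
    return ((state << 1) | (state >> (width - 1))) & mask

# Initial values: bit 0 hot, i.e. group 0 / way 0 has the highest priority.
global_state = 0b0001
group_states = [0b0001] * 4   # group0_state .. group3_state
```

After a replacement, `global_state = rotate_left(global_state)` hands the highest priority to the next group, matching the ring shift left described above.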
Figure 6 shows the initial state of the local and global rotating-priority registers in the concrete application example. group0_state is a 4-bit local rotating-priority register corresponding to the replacement priority of each way in group 0 of the cache; the bit that is 1 marks the highest replacement priority. The priorities in the figure are those after initialization, with value 4'b0001, indicating that way 0 has the highest priority and, other conditions being equal, should be evicted before the other ways of the group. Likewise, group1_state holds the replacement priority of ways 4, 5, 6, and 7 in group 1 of the cache; group2_state holds that of ways 8, 9, 10, and 11 in group 2; and group3_state holds that of ways 12, 13, 14, and 15 in group 3. global_state is the global rotating-priority register; it is also a 4-bit priority register, corresponding to the replacement priority of groups 0, 1, 2, and 3 of the cache, where a 1 in a given bit indicates that the corresponding group has the highest replacement priority. The priority in the figure is that after initialization, with value 4'b0001, indicating that group 0 has the highest replacement priority: if every group has a replaceable way, a way of group 0 is evicted preferentially.
4. The concrete intra-group selection algorithm is as follows:
4.1. Way 0 is selected under 4 conditions:
(1) the Used bit of way 0 is 0, and group0_state[3:0] = 4'b0001;
(2) the Used bit of way 0 is 0, the Used bits of ways 1, 2, and 3 are 1, and group0_state[3:0] = 4'b0010;
(3) the Used bit of way 0 is 0, the Used bits of ways 2 and 3 are 1, and group0_state[3:0] = 4'b0100;
(4) the Used bit of way 0 is 0, the Used bit of way 3 is 1, and group0_state[3:0] = 4'b1000;
4.2. Way 1 is selected under 4 conditions:
(1) the Used bit of way 1 is 0, and group1_state[3:0] = 4'b0010;
(2) the Used bit of way 1 is 0, the Used bits of ways 2, 3, and 0 are 1, and group1_state[3:0] = 4'b0100;
(3) the Used bit of way 1 is 0, the Used bits of ways 3 and 0 are 1, and group1_state[3:0] = 4'b1000;
(4) the Used bit of way 1 is 0, the Used bit of way 0 is 1, and group1_state[3:0] = 4'b0001;
4.3. Way 2 is selected under 4 conditions:
(1) the Used bit of way 2 is 0, and group2_state[3:0] = 4'b0100;
(2) the Used bit of way 2 is 0, the Used bits of ways 3, 0, and 1 are 1, and group2_state[3:0] = 4'b1000;
(3) the Used bit of way 2 is 0, the Used bits of ways 0 and 1 are 1, and group2_state[3:0] = 4'b0001;
(4) the Used bit of way 2 is 0, the Used bit of way 1 is 1, and group2_state[3:0] = 4'b0010;
4.4. Way 3 is selected under 4 conditions:
(1) the Used bit of way 3 is 0, and group3_state[3:0] = 4'b1000;
(2) the Used bit of way 3 is 0, the Used bits of ways 0, 1, and 2 are 1, and group3_state[3:0] = 4'b0001;
(3) the Used bit of way 3 is 0, the Used bits of ways 1 and 2 are 1, and group3_state[3:0] = 4'b0010;
(4) the Used bit of way 3 is 0, the Used bit of way 2 is 1, and group3_state[3:0] = 4'b0100;
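Read together, the sixteen conditions above amount to a circular scan: starting at the way marked hot in the group's register, take the first way whose Used bit is 0. A sketch under that reading (`select_in_group` is a hypothetical name):

```python
def select_in_group(used, group_state):
    """used: four Used bits of one group; group_state: 4-bit one-hot."""
    start = group_state.bit_length() - 1   # index of the hot bit
    for i in range(4):
        way = (start + i) % 4
        if used[way] == 0:
            return way                     # first unused way in circular order
    return None                            # every way of the group is in use
```

For instance, condition 4.1(2) — way 0 unused, ways 1, 2, 3 used, group0_state = 4'b0010 — is the scan starting at way 1 and wrapping around to way 0.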
5. The inter-group selection algorithm is as follows:
5.1. Group 0 is selected under 4 conditions:
(1) at least one of the four Used bits of group 0 is 0, and global_state[3:0] = 4'b0001;
(2) at least one of the four Used bits of group 0 is 0, the Used bits of groups 1, 2, and 3 are all 1, and global_state[3:0] = 4'b0010;
(3) at least one of the four Used bits of group 0 is 0, the Used bits of groups 2 and 3 are all 1, and global_state[3:0] = 4'b0100;
(4) at least one of the four Used bits of group 0 is 0, the Used bits of group 3 are all 1, and global_state[3:0] = 4'b1000;
5.2. Group 1 is selected under 4 conditions:
(1) at least one of the four Used bits of group 1 is 0, and global_state[3:0] = 4'b0010;
(2) at least one of the four Used bits of group 1 is 0, the Used bits of groups 2, 3, and 0 are all 1, and global_state[3:0] = 4'b0100;
(3) at least one of the four Used bits of group 1 is 0, the Used bits of groups 3 and 0 are all 1, and global_state[3:0] = 4'b1000;
(4) at least one of the four Used bits of group 1 is 0, the Used bits of group 0 are all 1, and global_state[3:0] = 4'b0001;
5.3. Group 2 is selected under 4 conditions:
(1) at least one of the four Used bits of group 2 is 0, and global_state[3:0] = 4'b0100;
(2) at least one of the four Used bits of group 2 is 0, the Used bits of groups 3, 0, and 1 are all 1, and global_state[3:0] = 4'b1000;
(3) at least one of the four Used bits of group 2 is 0, the Used bits of groups 0 and 1 are all 1, and global_state[3:0] = 4'b0001;
(4) at least one of the four Used bits of group 2 is 0, the Used bits of group 1 are all 1, and global_state[3:0] = 4'b0010;
5.4. Group 3 is selected under 4 conditions:
(1) at least one of the four Used bits of group 3 is 0, and global_state[3:0] = 4'b1000;
(2) at least one of the four Used bits of group 3 is 0, the Used bits of groups 0, 1, and 2 are all 1, and global_state[3:0] = 4'b0001;
(3) at least one of the four Used bits of group 3 is 0, the Used bits of groups 1 and 2 are all 1, and global_state[3:0] = 4'b0010;
(4) at least one of the four Used bits of group 3 is 0, the Used bits of group 2 are all 1, and global_state[3:0] = 4'b0100.
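The inter-group conditions follow the same circular-scan pattern, with a group eligible when any of its four Used bits is 0 (a sketch under that reading; `select_group` is a hypothetical name):

```python
def select_group(used16, global_state):
    """used16: 16 Used bits, used16[4*g : 4*g+4] belonging to group g."""
    start = global_state.bit_length() - 1   # group marked hot in global_state
    for i in range(4):
        g = (start + i) % 4
        if 0 in used16[4 * g: 4 * g + 4]:   # group has a replaceable way
            return g
    return None                             # no group has a free way
```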
Figure 7 shows the cache state during one run of the replacement algorithm in the concrete application example. Given global_state[3:0] = 4'b0010 and group 1's used[7:4] = 4'b0110, the inter-group algorithm selects group 1 as the replacement group. Then, given group1_state[3:0] = 4'b0100, the intra-group algorithm determines that way 7 should be evicted.
Figure 8 shows the cache state during a run of the replacement algorithm in another concrete application example. Given global_state[3:0] = 4'b0010, group 2's used[11:8] = 4'b0010, and all Used bits of group 1 equal to 1, the inter-group algorithm selects group 2 as the replacement group. Then, given group2_state[3:0] = 4'b0001, the intra-group algorithm determines that way 8 should be evicted.
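The two scenarios of Figures 7 and 8 can be replayed end to end under the circular-scan reading of the conditions (a self-contained sketch; `pick_victim` and `_scan` are hypothetical names):

```python
def _scan(start, eligible):
    """First index in circular order from `start` satisfying `eligible`."""
    for i in range(4):
        k = (start + i) % 4
        if eligible(k):
            return k
    return None

def pick_victim(used16, global_state, group_states):
    """Combine the inter-group and intra-group choices; returns a way 0..15."""
    g = _scan(global_state.bit_length() - 1,
              lambda k: 0 in used16[4 * k: 4 * k + 4])
    w = _scan(group_states[g].bit_length() - 1,
              lambda k: used16[4 * g + k] == 0)
    return 4 * g + w
```

With global_state = 4'b0010 and group 1's used[7:4] = 4'b0110 this yields way 7, and with group 1 fully used, group 2's used[11:8] = 4'b0010, and group2_state = 4'b0001 it yields way 8, matching the two figures.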
The above are merely preferred embodiments of the present invention; the scope of protection of the present invention is not limited to the above embodiments, and all technical schemes falling under the concept of the present invention belong to its scope of protection. It should be pointed out that, for those of ordinary skill in the art, improvements and refinements made without departing from the principle of the present invention should also be regarded as within the scope of protection of the present invention.

Claims (1)

1. A cache replacement algorithm based on grouped LRU, characterized in that the steps are:
(1) A global rotating-priority register global_state is provided. global_state serves as the mark of the rotating priority of each group in the cache; if a bit is 1, the group corresponding to that bit has the highest priority. After every replacement, global_state is rotated left by one bit.
Local rotating-priority registers group_state are provided: for the ways of each group in the cache, a local rotating-priority register group_state is provided. group_state serves as the mark of the rotating priority of each way within the group; if a bit is 1, the way corresponding to that bit has the highest priority.
(2) When a replacement is performed, it is first checked whether each group in the cache has a replacement candidate; among the groups that do, the group with the highest priority is selected according to the priorities above. Within the highest-priority group, each way is checked, and among the candidate ways the one with the highest priority is selected as the object to be evicted.
CN2012102749311A 2012-08-03 2012-08-03 Cache replacement algorithm based on packet least recently used (LRU) algorithm Pending CN102799538A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012102749311A CN102799538A (en) 2012-08-03 2012-08-03 Cache replacement algorithm based on packet least recently used (LRU) algorithm


Publications (1)

Publication Number Publication Date
CN102799538A true CN102799538A (en) 2012-11-28

Family

ID=47198651

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012102749311A Pending CN102799538A (en) 2012-08-03 2012-08-03 Cache replacement algorithm based on packet least recently used (LRU) algorithm

Country Status (1)

Country Link
CN (1) CN102799538A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5452440A (en) * 1993-07-16 1995-09-19 Zitel Corporation Method and structure for evaluating and enhancing the performance of cache memory systems
US20040215890A1 (en) * 2003-04-28 2004-10-28 International Business Machines Corporation Cache allocation mechanism for biasing subsequent allocations based upon cache directory state
CN101751245A (en) * 2010-01-18 2010-06-23 北京龙芯中科技术服务中心有限公司 Processor Cache write-in invalidation processing method based on memory access history learning

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103885890A (en) * 2012-12-21 2014-06-25 华为技术有限公司 Replacement processing method and device for cache blocks in caches
CN103885890B (en) * 2012-12-21 2017-04-12 华为技术有限公司 Replacement processing method and device for cache blocks in caches
CN104516827A (en) * 2013-09-27 2015-04-15 杭州信核数据科技有限公司 Cache reading method and device
CN104516827B (en) * 2013-09-27 2018-01-30 杭州信核数据科技股份有限公司 A kind of method and device of read buffer
CN106383792A (en) * 2016-09-20 2017-02-08 北京工业大学 Missing perception-based heterogeneous multi-core cache replacement method
CN106383792B (en) * 2016-09-20 2019-07-12 北京工业大学 A kind of heterogeneous polynuclear cache replacement method based on missing perception
CN108108312A (en) * 2016-11-25 2018-06-01 华为技术有限公司 A kind of cache method for cleaning and processor
CN106909987A (en) * 2017-01-23 2017-06-30 杭州电子科技大学 A kind of mixing bicycle distribution method based on using load balancing and life-span optimization
CN106909987B (en) * 2017-01-23 2020-08-11 杭州电子科技大学 Hybrid bicycle distribution method based on usage load balancing and service life optimization
WO2022226770A1 (en) * 2021-04-27 2022-11-03 深圳市大疆创新科技有限公司 Method and apparatus for accessing cache lines
CN116644008A (en) * 2023-06-16 2023-08-25 合芯科技有限公司 Cache replacement control method and device
CN116644008B (en) * 2023-06-16 2023-12-15 合芯科技有限公司 Cache replacement control method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20121128