KR20100005539A - Cache memory system and prefetching method thereof - Google Patents
Cache memory system and prefetching method thereof
- Publication number
- KR20100005539A (application KR1020080065625A)
- Authority
- KR
- South Korea
- Prior art keywords
- cache
- data
- address
- main memory
- requested
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0804—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with main memory updating
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0862—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with prefetch
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
Disclosed are a cache memory system with an improved hit rate and a method of prefetching a cache that efficiently determines the address of data to be prefetched from main memory into the cache. The cache memory system of the present invention includes a cache for storing data prefetched from the main memory; an address difference value calculator for calculating a difference value between addresses of the main memory requested by the processing apparatus; an address difference value storage unit for storing the difference values; and a cache control unit for determining the address of the data to be prefetched from the main memory based on the address difference values, and for controlling the data at the determined address to be prefetched into the cache.
Description
TECHNICAL FIELD The present invention relates to cache memory. More specifically, the present invention relates to a cache memory system and a cache prefetching method that improve the cache hit rate by efficiently determining the address of data to be prefetched from main memory into the cache, using difference values between the addresses of previously requested main memory accesses.
A cache is a small, fast memory that temporarily stores part of the main memory and exploits the locality observed in the main memory addresses accessed by a processing device such as a CPU. Locality is broadly classified into spatial locality and temporal locality. The CPU accesses the address space of main memory non-uniformly, and its accesses are often concentrated in the same address range for some period of time. This property is called temporal locality, and it indicates that the information to be used in the near future is mainly the information currently in use; temporal locality arises chiefly in program loops. In addition, the main memory addresses accessed by the CPU are often concentrated near similar addresses. This characteristic is called spatial locality.
Exploiting this spatial and temporal locality, the principle of a cache is to place a small amount of very fast storage between the CPU and the main memory, and to keep frequently used regions of main memory there. To increase the efficiency of a cache, it is necessary to maximize the probability that requested data is found in the cache (the hit ratio), to minimize the access time of the cache itself, and to minimize the latency incurred on a miss. In particular, prefetching is commonly applied to caches to improve the hit rate. Prefetching means predicting future requests and loading into the cache, in advance, regions of memory that are likely to be accessed. Prefetching is effective when previously accessed addresses are accessed repeatedly, as in a program's do-loop, but when the spatial and temporal locality of the data is weak, the miss rate of cache accesses increases and prefetching may not be efficient.
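The effect of locality on prefetching can be illustrated with a toy model. This is a minimal sketch, not part of the patent: the line size, address stream, and function names are assumptions chosen for illustration.

```python
# Hypothetical sketch: why simple next-line prefetching helps sequential
# accesses but not strided ones. All names and parameters are illustrative.

LINE_SIZE = 4  # words per cache line (assumed)

def lines_touched(addresses, line_size=LINE_SIZE):
    """Map word addresses to the cache lines they fall in."""
    return [a // line_size for a in addresses]

# Sequential access (strong spatial locality): consecutive addresses share
# lines, so prefetching the next line ahead of time hides most misses.
sequential = list(range(0, 16))   # 0, 1, 2, ..., 15
# Strided access (weak spatial locality): a stride of 8 skips whole lines,
# so a fixed next-line prefetcher fetches lines that are never used.
strided = list(range(0, 128, 8))  # 0, 8, 16, ..., 120

print(lines_touched(sequential))  # few distinct lines, each reused
print(lines_touched(strided))     # a new line on every single access
```

In the sequential stream every cache line serves four requests, while in the strided stream every access lands on a fresh line, which is exactly the case the difference-value prefetching below is designed to handle.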
The problem to be solved by the present invention is to provide a cache memory system, and a prefetching method thereof, that improve the data access hit ratio of the cache and reduce memory latency even for data with weak locality, by efficiently determining the address of the data to be prefetched based on the history of addresses previously requested by the processing apparatus.
In order to solve the above problems, a cache memory system according to the present invention comprises: a cache for storing data prefetched from the main memory; an address difference value calculator for calculating a difference value between addresses of the main memory requested by a processing device; an address difference value storage unit for storing the difference values; and a cache controller configured to determine the address of data to be prefetched from the main memory based on the difference values, and to control the data at the determined address to be prefetched into the cache.
The address difference value calculator preferably calculates the difference between a first address of the main memory requested by the processing device at a first time point and a second address of the main memory requested at a second time point preceding the first time point.
The cache controller may control the prefetch to read the data stored at the address obtained by adding the difference value to the address of the main memory most recently requested by the processing apparatus, and to store that data in the cache.
The cache controller may determine an address of data to be prefetched from the main memory by using a difference value most recently stored among a plurality of difference values stored in the address difference value storage unit.
The cache controller may determine an address of data to be prefetched from the main memory using a difference value having the highest frequency among a plurality of difference values stored in the address difference value storage unit.
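The two address-selection policies above can be sketched in a few lines. This is an assumed simplified model for illustration only; the function names are not from the patent, and the hardware realization is not reproduced.

```python
from collections import Counter

# Sketch of the two prefetch-address policies: use the most recently stored
# difference value, or the difference value with the highest frequency.

def predict_most_recent(last_addr, deltas):
    """Prefetch address = most recently requested address + most recent delta."""
    return last_addr + deltas[-1]

def predict_most_frequent(last_addr, deltas):
    """Prefetch address = last address + the delta observed most often."""
    delta, _count = Counter(deltas).most_common(1)[0]
    return last_addr + delta

# Example request stream: 100, 104, 108, 116, 120 -> deltas 4, 4, 8, 4
deltas = [4, 4, 8, 4]
print(predict_most_recent(120, deltas))    # 124
print(predict_most_frequent(120, deltas))  # 124 (4 is the most frequent delta)
```

The most-recent policy reacts faster to a stride change, while the most-frequent policy is more robust to a single irregular access in an otherwise fixed-stride stream.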
The cache controller preferably controls the data at the determined address to be prefetched into the cache when there is no pending request from the processing device to read data from, or write data to, the main memory.
Preferably, the cache controller stores, in a predetermined area of each subline held in the cache, an identifier indicating whether the data of that subline was present in the cache when it was requested by the processing apparatus.
When there is no empty space in the cache, the cache controller preferably uses the identifiers to identify a cache line in which the data of all sublines has been hit, and controls the data of the identified cache line to be replaced with data prefetched from the main memory.
Alternatively, if there is no empty space in the cache, the cache controller preferably controls the cache line storing the data of the address farthest from the address most recently requested by the processing device to be replaced with data prefetched from the main memory.
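The two replacement heuristics can be sketched as follows. The per-subline hit identifier and the line layout are modeled here with plain Python objects; the field names and line contents are assumptions for illustration, not the patent's hardware format.

```python
# Illustrative sketch of the two victim-selection heuristics described above.

def fully_hit_victim(lines):
    """Pick a line whose sublines have all been hit (its data is 'used up')."""
    for idx, line in enumerate(lines):
        if all(line["subline_hit"]):
            return idx
    return None  # no fully-hit line exists

def farthest_victim(lines, last_requested_addr):
    """Pick the line whose address is farthest from the last request."""
    return max(range(len(lines)),
               key=lambda i: abs(lines[i]["addr"] - last_requested_addr))

lines = [
    {"addr": 0x100, "subline_hit": [True, True, True, True]},
    {"addr": 0x900, "subline_hit": [True, False, False, False]},
    {"addr": 0x140, "subline_hit": [False, False, True, False]},
]
print(fully_hit_victim(lines))        # 0: every subline already served a hit
print(farthest_victim(lines, 0x104))  # 1: 0x900 is farthest from 0x104
```

The first heuristic evicts data that has already served its purpose; the second bets that spatial locality makes far-away lines the least likely to be needed soon.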
When data at a given address is requested by the processing device during a prefetch operation, the cache controller determines whether the requested data exists in the cache. If it does, the data at that address is provided to the processing apparatus and the prefetch operation is then resumed. If the requested data does not exist in the cache but the address being prefetched is the same as the requested address, the prefetch operation is preferably completed and the prefetched data is provided to the processing apparatus.
Also, a method of prefetching data into a cache according to an embodiment of the present invention includes: calculating and storing a difference value between addresses of the main memory previously requested by a processing device; determining the address of data to be prefetched from the main memory based on the difference value; and controlling the data at the determined address to be read from the main memory and stored in the cache.
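The steps of the claimed method can be put together in a minimal end-to-end sketch: record the deltas between requested addresses, predict the next address, and prefetch it when the memory is idle. The class, its attribute names, and the set-based cache model are all assumptions made for this illustration.

```python
# Hedged sketch of the overall prefetching method; cache is a plain set of
# addresses, with no capacity limit or replacement, to keep the idea visible.

class StridePrefetcher:
    def __init__(self):
        self.cache = set()     # addresses currently held in the cache
        self.deltas = []       # address difference value storage unit
        self.last_addr = None  # most recently requested address

    def on_request(self, addr):
        """Handle a read request: record the delta, report hit or miss."""
        if self.last_addr is not None:
            self.deltas.append(addr - self.last_addr)
        self.last_addr = addr
        hit = addr in self.cache
        self.cache.add(addr)  # on a miss the data is fetched from main memory
        return hit

    def prefetch_when_idle(self):
        """With no request pending, fetch (last address + latest delta)."""
        if self.deltas:
            self.cache.add(self.last_addr + self.deltas[-1])

p = StridePrefetcher()
for a in (100, 108, 116):
    p.on_request(a)
p.prefetch_when_idle()    # predicts 116 + 8 = 124 and loads it
print(p.on_request(124))  # True: the predicted address hits in the cache
```

A stream with a fixed stride thus turns every access after the first prefetch into a hit, which is the hit-ratio improvement the claims aim at.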
According to the present invention, the hit ratio of the cache can be improved. In addition, according to the present invention, it is possible to improve the overall performance of the system using the cache by hiding the memory delay.
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings.
FIG. 1 is a block diagram illustrating the configuration of a cache memory system according to an exemplary embodiment of the present invention.
Referring to FIG. 1, the cache memory system includes a cache for storing data prefetched from the main memory, an address difference value calculator, an address difference value storage unit, and a cache controller.
When the processing device requests access to an address of the main memory, the address difference value calculator calculates the difference between the requested address and the previously requested address, and the calculated difference value is stored in the address difference value storage unit.
FIG. 2 is a reference diagram sequentially showing addresses of the main memory requested by the processing device.
The cache controller determines the address of the data to be prefetched by adding a stored difference value to the address of the main memory most recently requested by the processing device.
As such, when the target address of the prefetch is determined, the cache controller reads the data stored at that address from the main memory and stores it in the cache.
Meanwhile, the address difference value storage unit may store a plurality of difference values, as illustrated in FIG. 3, and the cache controller may use either the most recently stored difference value or the most frequently occurring difference value to determine the prefetch address.
FIG. 4 is a diagram illustrating the structure of a cache according to the present invention.
Referring to FIG. 4, the cache is organized into cache lines, and each cache line is divided into a plurality of sublines.
The cache controller stores, in a predetermined area of each subline, an identifier indicating whether the data of that subline has been hit by a request from the processing apparatus.
FIG. 5 is a reference diagram illustrating a specific configuration of a cache according to the present invention.
Referring to FIG. 5, when there is no empty space in the cache, the cache controller uses the identifiers to select a replacement target, for example a cache line in which the data of all sublines has already been hit, or the cache line storing the address farthest from the most recently requested address.
FIG. 6 is a flowchart illustrating a method of prefetching a cache according to the present invention.
Referring to FIG. 6, it is first determined whether there is an access request from the processing device to the main memory.
If there is an access request, the difference value between the requested address and the previously requested address is calculated and stored.
If there is no pending access request, the address of the data to be prefetched from the main memory is determined based on the stored difference values.
The data at the determined address is then read from the main memory and stored in the cache.
As described above, the prefetching operation of the cache according to the present invention is preferably performed when there is no request for access to the main memory from the processing apparatus. However, if there is an access request from the processing device to the main memory after the prefetching operation is started, interrupt handling as described below is required.
FIG. 7 is a flowchart illustrating a process in the case where there is an access request during the prefetching operation according to the present invention.
Referring to FIG. 7, when an access request arrives from the processing device during the prefetch operation, it is first determined whether the data at the requested address exists in the cache. If it does, the data is provided to the processing device and the prefetch operation is resumed.
If the data at the requested address does not exist in the cache, it is determined whether the requested address is the same as the address currently being prefetched; if so, the prefetch operation is completed and the prefetched data is provided to the processing device.
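The decision made when a request interrupts a prefetch can be summarized in a small sketch. The function name, the tuple return convention, and the set-based cache model are assumptions for illustration; the patent describes the behavior, not this interface.

```python
# Hedged sketch of the FIG. 7 interrupt handling: a request arriving
# mid-prefetch is served from the cache, from the in-flight prefetch if the
# addresses match, or by a normal fetch otherwise.

def handle_request_during_prefetch(addr, cache, inflight_prefetch_addr):
    """Return (data_source, resume_prefetch) for a mid-prefetch request."""
    if addr in cache:
        # Serve the hit, then resume the interrupted prefetch operation.
        return "cache", True
    if addr == inflight_prefetch_addr:
        # The prefetch already targets this address: finish it and forward
        # the prefetched data directly to the processor.
        return "prefetch", False
    # Otherwise fetch the requested address from main memory as usual.
    return "main_memory", False

print(handle_request_during_prefetch(0x200, {0x200, 0x204}, 0x300))
print(handle_request_during_prefetch(0x300, {0x200}, 0x300))
```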
As described above, although the present invention has been described with reference to limited embodiments and drawings, the present invention is not limited to the above-described embodiments, and various modifications and changes are possible by those of ordinary skill in the art to which the present invention pertains. Accordingly, the spirit of the invention should be understood only by the claims set forth below, and all equivalent modifications will fall within the scope of the invention. In addition, the system according to the present invention can be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium includes all kinds of recording devices in which data that can be read by a computer system is stored. Examples of the recording medium include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices, and also include carrier waves (for example, transmission through the Internet). The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
FIG. 1 is a block diagram illustrating the configuration of a cache memory system according to an exemplary embodiment of the present invention.
FIG. 2 is a reference diagram sequentially illustrating addresses of the main memory requested by the processing device.
FIG. 3 is a reference diagram illustrating address difference values stored in the address difference value storage unit.
FIG. 4 is a diagram illustrating the structure of a cache according to the present invention.
FIG. 5 is a reference diagram illustrating a specific configuration of a cache according to the present invention.
FIG. 6 is a flowchart illustrating a method of prefetching a cache according to the present invention.
FIG. 7 is a flowchart illustrating a process in the case where there is an access request during the prefetching operation according to the present invention.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020080065625A KR20100005539A (en) | 2008-07-07 | 2008-07-07 | Cache memory system and prefetching method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020080065625A KR20100005539A (en) | 2008-07-07 | 2008-07-07 | Cache memory system and prefetching method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20100005539A true KR20100005539A (en) | 2010-01-15 |
Family
ID=41814910
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020080065625A KR20100005539A (en) | 2008-07-07 | 2008-07-07 | Cache memory system and prefetching method thereof |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20100005539A (en) |
- 2008
- 2008-07-07: KR application KR1020080065625A filed, published as patent KR20100005539A/en; status: not active (Application Discontinuation)
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012030466A3 (en) * | 2010-08-30 | 2012-06-07 | Intel Corporation | Method and apparatus for fuzzy stride prefetch |
US8433852B2 (en) | 2010-08-30 | 2013-04-30 | Intel Corporation | Method and apparatus for fuzzy stride prefetch |
WO2012061962A1 (en) * | 2010-11-12 | 2012-05-18 | 蓝天电脑股份有限公司 | Plate-free printing transfer film |
WO2012091234A1 (en) * | 2010-12-31 | 2012-07-05 | 세종대학교산학협력단 | Memory system including a nonvolatile memory and a volatile memory, and processing method using the memory system |
US9411719B2 (en) | 2010-12-31 | 2016-08-09 | Seong University Industry Academy Cooperation Foundation | Memory system including nonvolatile and volatile memory and operating method thereof |
US10140060B2 (en) | 2010-12-31 | 2018-11-27 | Sejong University Industry Academy Cooperation Foundation | Memory system including a nonvolatile memory and a volatile memory, and processing method using the memory system |
US10558395B2 (en) | 2010-12-31 | 2020-02-11 | Sejong University Industry Academy Cooperation Foundation | Memory system including a nonvolatile memory and a volatile memory, and processing method using the memory system |
US11188262B2 (en) | 2010-12-31 | 2021-11-30 | SK Hynix Inc. | Memory system including a nonvolatile memory and a volatile memory, and processing method using the memory system |
US9880940B2 (en) | 2013-03-11 | 2018-01-30 | Samsung Electronics Co., Ltd. | System-on-chip and method of operating the same |
WO2014178683A1 (en) * | 2013-05-03 | 2014-11-06 | 삼성전자 주식회사 | Cache control device for prefetching and prefetching method using cache control device |
US9886384B2 (en) | 2013-05-03 | 2018-02-06 | Samsung Electronics Co., Ltd. | Cache control device for prefetching using pattern analysis processor and prefetch instruction and prefetching method using cache control device |
US9645934B2 (en) | 2013-09-13 | 2017-05-09 | Samsung Electronics Co., Ltd. | System-on-chip and address translation method thereof using a translation lookaside buffer and a prefetch buffer |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10740261B2 (en) | System and method for early data pipeline lookup in large cache design | |
KR102369500B1 (en) | Adaptive prefetching in a data processing apparatus | |
KR102470184B1 (en) | Cache aging policy selection for prefetch based on cache test region | |
US8433852B2 (en) | Method and apparatus for fuzzy stride prefetch | |
CN109478165B (en) | Method for selecting cache transfer strategy for prefetched data based on cache test area and processor | |
US11803484B2 (en) | Dynamic application of software data caching hints based on cache test regions | |
US20080086599A1 (en) | Method to retain critical data in a cache in order to increase application performance | |
US20180300258A1 (en) | Access rank aware cache replacement policy | |
CN113407119B (en) | Data prefetching method, data prefetching device and processor | |
US11188256B2 (en) | Enhanced read-ahead capability for storage devices | |
US20090063777A1 (en) | Cache system | |
US20150143045A1 (en) | Cache control apparatus and method | |
KR20100005539A (en) | Cache memory system and prefetching method thereof | |
US20120124291A1 (en) | Secondary Cache Memory With A Counter For Determining Whether to Replace Cached Data | |
WO2023173991A1 (en) | Cache line compression prediction and adaptive compression | |
US20120066456A1 (en) | Direct memory access cache prefetching | |
US8356141B2 (en) | Identifying replacement memory pages from three page record lists | |
US10997077B2 (en) | Increasing the lookahead amount for prefetching | |
WO2023173995A1 (en) | Cache line compression prediction and adaptive compression | |
KR102692838B1 (en) | Enhanced read-ahead capability for storage devices | |
US11449428B2 (en) | Enhanced read-ahead capability for storage devices | |
US10776043B2 (en) | Storage circuitry request tracking | |
US11048637B2 (en) | High-frequency and low-power L1 cache and associated access technique | |
CN117120989A (en) | Method and apparatus for DRAM cache tag prefetcher | |
WO2021059198A1 (en) | Circuitry and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WITN | Withdrawal due to no request for examination |