EP0365117A2 - Data-processing apparatus including a cache memory - Google Patents
- Publication number
- EP0365117A2 (application EP89307983A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- cache
- cache memory
- address
- memory
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/10—Address translation
- G06F12/1027—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
- G06F12/1045—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
- G06F12/1063—Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache the data cache being concurrently virtually addressed
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0815—Cache consistency protocols
- G06F12/0831—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
Abstract
Description
- This invention relates to data processing apparatus and, more specifically, is concerned with a data processing apparatus including a cache memory.
- It is well known to provide a two-level memory system, consisting of a main memory and a smaller, faster cache memory. In operation, the cache is arranged to hold copies of data items from the main memory that are currently in use, or are likely to be required in the near future, so that these items can be accessed rapidly, without the delay of a main memory access. Such memories are described, for example, in "Cache Memories" by A.J. Smith, ACM Computing Surveys, September 1982, page 473.
- As described on page 479 of the above Computing Surveys article, in a computer system with virtual memory, the cache may potentially be accessed either with a real (or physical) address, or a virtual address. The advantage of using the virtual address is that it is not necessary to wait for the address to be translated before accessing the cache, and hence the cache access is faster. The address has to be translated only if the required data item is not present in the cache.
- The translation of the virtual address may conventionally be performed by a memory management unit (MMU) comprising an associatively addressed memory holding address translation information (e.g. page table entries) for recently used virtual addresses. If the required address translation information is not present in the MMU, then the main memory is accessed, to read the required page table entry. Possibly several main memory accesses are required to translate an address.
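The translation path described above can be sketched in software. The following is an illustrative model only: the names (`Mmu`, `translate`), the 4 KB page size, and the single-level page table are assumptions for exposition, whereas the patent describes a hierarchically organised table that may need several memory accesses per walk.

```python
# Illustrative model of an MMU with a small translation cache (TLB).
# PAGE_SHIFT, Mmu and translate are hypothetical names, not from the patent.

PAGE_SHIFT = 12  # assume 4 KB pages

class Mmu:
    def __init__(self, page_table):
        self.page_table = page_table  # virtual page number -> physical page number
        self.tlb = {}                 # recently used translations
        self.table_walks = 0          # counts slow page-table accesses

    def translate(self, va):
        vpn = va >> PAGE_SHIFT
        offset = va & ((1 << PAGE_SHIFT) - 1)
        if vpn not in self.tlb:       # TLB miss: walk the page table
            self.table_walks += 1
            self.tlb[vpn] = self.page_table[vpn]
        return (self.tlb[vpn] << PAGE_SHIFT) | offset

mmu = Mmu({0x1: 0x8, 0x2: 0x9})
assert mmu.translate(0x1234) == 0x8234   # first access walks the table
assert mmu.translate(0x1FFF) == 0x8FFF   # second access hits the TLB
assert mmu.table_walks == 1
```

The invention's point is precisely that the slow `page_table[vpn]` step need not always go to main memory: the table entry may already be in the cache.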
- A problem with this is that, since main memory access is relatively slow, the address translation process can take a relatively long time. The object of the present invention is to overcome this problem.
- According to the invention there is provided a data processing apparatus comprising
- (a) a data processing unit,
- (b) a cache memory, addressable by means of a virtual address from the processing unit, so as to access data items from the cache memory,
- (c) a memory management unit, for translating virtual addresses from the processing unit into physical addresses, and
- (d) a main memory, addressable by the physical address from the memory management unit,
characterised in that the physical addresses from the memory management unit can also be used to address the cache memory, to allow the memory management unit to access address translation information from the cache memory.
- A data processing system embodying a data memory system in accordance with the invention will now be described by way of example with reference to the accompanying drawings.
- Figure 1 is an overall block diagram of the data processing system.
- Figure 2 is a block diagram of one of the processing modules of the system.
- Figure 3 is a block diagram of a cache memory forming part of each of the processing modules.
- Figure 4 is a flow chart illustrating the operation of the cache memory.
- Figure 5 is a flow chart illustrating the operation of a snoop logic unit forming part of each of the processing modules.
- Referring to Figure 1, the data processing system comprises a plurality of data processing modules 10, and a main memory 11, interconnected by a high-speed bus 12.
- In operation, any one of the processing modules can acquire ownership of the bus for the purpose of initiating a bus transaction, e.g. a read or write over the bus 12 to the memory module 11. Ownership of the bus is acquired by a bus arbitration scheme, details of which are not relevant to the present invention.
- Referring now to Figure 2, this shows one of the processing modules 10 in more detail. The processing module comprises a processing unit (CPU) 20.
- The processing module also includes a cache memory 21. This is a relatively small, fast-access memory, compared with the main memory 11, and holds local copies of data items, for rapid access by the processing unit. In operation, when the processing unit requires to access a data item, for reading or writing, it generates the virtual address VA of the item, on an address path 22. The virtual address VA is applied to the cache 21. If the required data item is present in the cache, a HIT is indicated, allowing the processing unit to read or write the data item, by way of the data path 23.
- If, on the other hand, the required data item is not present in the cache, it must be accessed from the main memory and loaded into the cache. The main memory is addressed by means of a physical address PA, which is derived from the virtual address VA by means of a memory management unit (MMU) 24. The physical address is sent over the bus 12 by way of an interface unit 25.
- The construction and operation of the MMU 24 may be conventional and so need not be described in detail. Briefly, the MMU 24 uses a small associatively addressed memory (not shown) to hold copies of recently used page table entries. If the required page table entry is present, it can be accessed immediately, and used to form the required physical address of the data. If, on the other hand, the page table entry is not present, the MMU generates the physical addresses for searching a hierarchically organised page table structure to access the required entry. The retrieved page table entry is then loaded into the associative memory in the MMU, so that it is available for future translations. The MMU may have to perform several page table accesses to obtain the required page table entry.
- In a conventional system, the physical address of each page table entry is used to access the page tables in the main memory of the system. However, in the present case, the physical address PA of the page table entry is stored in a buffer 27 (Figure 3), and is then used to address the cache 21. If the required page table entry is held in the cache (because it has been accessed previously), then it can be retrieved rapidly from the cache and returned to the MMU. If, on the other hand, the page table entry is not in the cache, then the main memory 11 must be accessed to obtain the required table entry. The page table entry is then loaded into the cache 21.
- Hence, it can be seen that the cache 21 can be addressed in two ways:
(i) by the virtual address from the CPU 20, to access data, and
(ii) by the physical address from the MMU 24, to access page table entries for address translation.
- Allowing the MMU to access page table entries in the cache in this way greatly speeds up the operation of the MMU, and hence improves the efficiency of the system.
- As will be shown, the physically addressed items in the cache are distinguished from the virtually addressed items by having a specially reserved context tag value.
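The dual-addressing idea, with a reserved context tag separating physically addressed page table entries from virtually addressed data, can be sketched as follows. All names here (`DualCache`, `MMU_CONTEXT`, the value 0xFFFF) are illustrative assumptions; the patent only requires that one context value be reserved for the MMU.

```python
# Sketch: one cache holds both virtually addressed data and physically
# addressed page table entries, told apart by a reserved context tag.

MMU_CONTEXT = 0xFFFF  # hypothetical reserved context number for the MMU

class DualCache:
    def __init__(self):
        self.lines = {}  # (context tag, address tag) -> cached value

    def load_data(self, context, va, value):
        assert context != MMU_CONTEXT   # programs never use the reserved value
        self.lines[(context, va)] = value

    def load_pte(self, pa, value):
        # page table entries are cached under the reserved MMU context
        self.lines[(MMU_CONTEXT, pa)] = value

    def lookup_data(self, context, va):
        return self.lines.get((context, va))

    def lookup_pte(self, pa):
        return self.lines.get((MMU_CONTEXT, pa))

c = DualCache()
c.load_data(context=7, va=0x100, value="payload")
c.load_pte(pa=0x100, value="pte")   # same numeric address, no clash
assert c.lookup_data(7, 0x100) == "payload"
assert c.lookup_pte(0x100) == "pte"
```

Because the context tag participates in the match, a data item and a page table entry with the same numeric address occupy logically distinct cache entries.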
- The processing module also includes a snoop logic unit 26, whose purpose is to ensure coherency between the contents of the cache 21 and the caches in the other processing modules. The snoop logic 26 is an associative memory which stores as tags the physical addresses of all the data (or page table entries) currently resident in the cache 21. The snoop logic receives all the physical addresses appearing on the high-speed bus from all the processing modules, and compares each received address with the stored physical address tags. If the received address matches any one of the stored physical addresses, the snoop logic generates the corresponding virtual address, and applies it to the cache 21 so as to access the corresponding line of data.
- The operation of the snoop logic unit 26 will be described in more detail later.
- Referring now to Figure 3, this shows the cache 21 in more detail.
- In this figure, the virtual address VA is shown as consisting of the 32 bits VA 0-31, where VA0 is the least significant bit, and the physical address PA stored in the buffer 27 is shown as consisting of bits PA 0-31. Multiplexers select between these two addresses for addressing the cache.
- The cache 21 comprises a data array 30, which is a random-access memory (RAM) holding 16K double words, each double word consisting of 64 bits (8 bytes). The data array 30 is addressed by bits VA 3-16 (or PA 3-16), so as to access one double word. This double word can then be written or read, by way of the high-speed bus 12, or the processor data path 23. The data array is regarded as being organised into 4K lines of data, each line containing four double words (32 bytes). Bits VA 5-16 (or PA 5-16) select one line of the cache, while bits VA 3-4 (or PA 3-4) select one double word within this line.
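The address slicing described above can be checked arithmetically (bit numbers follow the text, with VA0 the least significant bit). The `bits` helper is an illustrative assumption, not part of the patent.

```python
# Verifying the address-field widths from the description:
# VA 3-16 (14 bits) indexes 16K double words; VA 5-16 (12 bits) selects
# one of 4K lines; VA 3-4 (2 bits) selects a double word within a line.

def bits(value, lo, hi):
    """Extract bits lo..hi (inclusive) of value, with bit 0 least significant."""
    return (value >> lo) & ((1 << (hi - lo + 1)) - 1)

va = 0xDEADBEEF
dword_index = bits(va, 3, 16)   # 14 bits -> one of 16K double words
line_index  = bits(va, 5, 16)   # 12 bits -> one of 4K lines
word_select = bits(va, 3, 4)    # 2 bits  -> one of 4 double words per line
addr_tag    = bits(va, 17, 31)  # 15 bits stored in the address tag array 31

assert 1 << 14 == 16 * 1024     # 16K double words in the data array
assert 1 << 12 == 4 * 1024      # 4K lines
assert dword_index == (line_index << 2) | word_select
```

The final assertion confirms that the line-select and word-select fields together reconstitute the double-word index, so the two descriptions of the data array addressing are consistent.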
- The cache also comprises an address tag array 31, a context tag array 32, a status array 33, and a copy-back address array 34. Each of these arrays comprises a RAM, having 4K locations, one for each line of data in the cache. These arrays are all addressed in parallel by bits VA 5-16 (or PA 5-16), so that whenever a line of the data array is accessed, the corresponding location in each of the arrays 31-34 is also accessed.
- The address tag array 31 holds a 15-bit address tag for each line in the cache. Whenever a new line is loaded into the data array, bits VA 17-31 (or PA 17-31) of its address are written into the array 31 as the address tag for that line. Correspondingly, whenever a line is accessed in the data array, its address tag is compared with address bits VA 17-31 (or PA 17-31). If they are equal, a VT MATCH signal is produced.
- The cache also comprises two registers, referred to as the current context register 35 and the MMU context register 36.
- The current context register 35 is accessible by the processing unit 20. Whenever the processing unit 20 initiates a new program, it allocates a 16-bit context number to that program. This context number can have any of a range of values, excluding a predetermined value, which is reserved for the MMU. Whenever the processing unit starts to execute a program to which a context number has been allocated, it loads that context number into the current context register 35.
- The MMU context register 36 holds a preset value, equal to the reserved context number of the MMU. A multiplexer 37 selects between the outputs of the two registers 35 and 36.
- The context tag array 32 holds a 16-bit context tag for each line of the cache. The data input of this array is connected to the output of the multiplexer 37. Whenever a new line of data or page table entry is loaded into the cache, the output of the selected register 35 or 36 is written into the array 32 as the context tag for that line. Correspondingly, whenever a line is accessed, its context tag is compared with the output of the multiplexer 37; if they are equal, a CT MATCH signal is produced. Both the tag arrays 31 and 32 are thus interrogated in parallel on every cache access.
- The status array 33 holds three status bits for each line of the cache. These indicate the state of the corresponding line, as follows:

Status bits | State
---|---
000 | INVALID
001 | PRIVATE
011 | MODIFIED
101 | SHARED

- PRIVATE means that the data in the line is not shared with any of the other processing modules, and that it has not been modified (updated) since it was loaded into the cache.
- MODIFIED means that the data in the line is not shared, and that it has been modified. It is therefore the most up-to-date copy in the system of this data.
- SHARED means that a copy of this data is also held in at least one other processing module, and that it has not been modified.
- The outputs of the arrays 31-33 are all fed to a cache control logic unit 38, which controls the operation of the cache as will be described.
- The copy-back address array 34 holds a 19-bit physical address PA for each line in the cache. This indicates the physical address to which the line will eventually be copied back.
- Referring now to Figure 4, this illustrates the operation of the cache 21.
- As mentioned above, whenever the processing unit 20 requires to access a data item, it applies the virtual address VA of the data to the cache 21, so as to access a line of the cache. If VT MATCH and CT MATCH are both true, and if the line is valid (i.e. the status bits are not equal to 000), then a HIT is scored, indicating that the required data item is present in the addressed line of the cache. Otherwise, a MISS is scored.
- The operation of the cache depends on whether a HIT or a MISS is scored, and on whether this is a READ or WRITE operation, as follows.
- (1) READ HIT. In this case, the data can be accessed immediately from the cache. The status of the cache line is not changed.
- (2) READ MISS. In this case, the required data must be fetched from the main store, and loaded into the cache, overwriting the existing line of the cache. If the existing line is in the MODIFIED state, it must first be copied back to the main memory, so as to ensure that the most up-to-date copy of the data is preserved. This is achieved by means of a block write transaction over the high speed bus. The required data is then fetched from the main memory by means of a block read transaction over the high speed bus, and loaded into the cache. The status of the new block is set either to SHARED or PRIVATE, according to whether or not this line is already present in the cache of another processing module, as indicated by a "shared" status line of the bus.
- (3) WRITE HIT. If the current status of the cache line is PRIVATE, the data is written into the cache, and the status is set to MODIFIED. If the status is already MODIFIED, the write proceeds without delay and there is no state change. If the cache line status is SHARED, then the physical address of the line is broadcast over the bus to the other processing modules, so that they can invalidate the corresponding line in their caches, to ensure cache coherency. This is referred to as a broadcast invalidate operation. The data is written into the cache and the cache line status set to MODIFIED.
- (4) WRITE MISS. In this case, the cache follows the sequence for read miss described above, followed by the sequence for write hit.
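The four cases above can be summarised as a transition function on the status of the addressed line. This is a simplified sketch, not the patent's logic: bus activity is modelled as a list of action names, and the HIT/MISS decision is passed in rather than derived from the tag comparisons.

```python
# Simplified model of the READ/WRITE HIT/MISS transitions described above.
# States use the status-bit encodings from the status array 33.

INVALID, PRIVATE, MODIFIED, SHARED = "000", "001", "011", "101"

def access(state, op, hit, shared_on_bus=False):
    """Return (new line state, bus actions) for one cache access."""
    actions = []
    if not hit:                          # cases (2) and (4): refill the line
        if state == MODIFIED:
            actions.append("copy_back")  # preserve the most up-to-date copy
        actions.append("block_read")
        state = SHARED if shared_on_bus else PRIVATE
    if op == "write":                    # cases (3) and (4)
        if state == SHARED:
            actions.append("broadcast_invalidate")
        state = MODIFIED
    return state, actions

assert access(PRIVATE, "read", hit=True) == (PRIVATE, [])          # (1) READ HIT
assert access(MODIFIED, "read", hit=False) == (PRIVATE, ["copy_back", "block_read"])
assert access(SHARED, "write", hit=True) == (MODIFIED, ["broadcast_invalidate"])
```

A WRITE MISS simply falls through both branches, matching the description of case (4) as a read miss followed by a write hit.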
- The operation of the cache is similar when it is addressed by the physical address PA from the MMU in order to access a page table entry.
- Referring now to Figure 5, the operation of the snoop logic unit 26 is as follows. If the snoop logic detects a match during a broadcast invalidate operation by another processing module, it sets the status of the addressed cache line to INVALID. This ensures cache coherency.
- If, on the other hand, the snoop logic detects a match during a block read transaction, instead of during a broadcast invalidate, it checks the status of the data line in the cache 21. If the status of the cache line is MODIFIED, the snoop logic initiates an INTERVENTION operation. This causes the block read transaction to be temporarily suspended, while the data line is copied back to the main memory. The block read transaction is then allowed to continue. This ensures that the most up-to-date copy of the data is available in the main memory for the block read transaction.
- It should be noted that the snoop logic monitors block read transactions generated by all the processing modules, including the module in which the snoop logic is located. This latter possibility is referred to as a "self-snoop" operation, its purpose being to prevent the problem of synonyms in the cache. A synonym occurs where two or more virtual addresses map on to the same physical address, so that more than one copy of the same data item may be present in different locations of the cache.
- If the cache hit resulted from a self-snoop operation, the status of the addressed line of the cache is set to INVALID. Thus, the existing copy of the data item in the cache is invalidated, preventing the occurrence of a synonym.
- If, on the other hand, the cache hit resulted from a read transaction by another processing module, then the status of the addressed line of the cache is set to SHARED, and the shared status line of the bus is asserted, so as to inform the other module that the data in question is also present in the cache in this processing module.
- The operation of the snoop logic is the same for cached page table entries as it is for data entries.
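The snoop decisions described above can be sketched as a small function acting on a matching cache line. This is an illustrative model under stated assumptions: the transaction kind and self-snoop flag are passed in directly, and bus signalling is reduced to action names.

```python
# Model of the snoop logic decisions for a line whose physical address
# matched a bus transaction. States use the status-array encodings.

INVALID, MODIFIED, SHARED = "000", "011", "101"

def snoop(line_state, transaction, self_snoop=False):
    """Return (new line state, actions) for a matching cache line."""
    actions = []
    if transaction == "broadcast_invalidate":
        return INVALID, actions            # coherency: drop our copy
    # otherwise a block read transaction was observed:
    if line_state == MODIFIED:
        actions.append("intervention")     # copy back before the read proceeds
    if self_snoop:
        return INVALID, actions            # prevent a synonym in our own cache
    actions.append("assert_shared")        # tell the reading module we share it
    return SHARED, actions

assert snoop(SHARED, "broadcast_invalidate") == (INVALID, [])
assert snoop(MODIFIED, "block_read") == (SHARED, ["intervention", "assert_shared"])
assert snoop(SHARED, "block_read", self_snoop=True) == (INVALID, [])
```

As the description notes, the same decisions apply whether the matching line holds data or a cached page table entry.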
Claims (7)
- Data processing apparatus comprising:
(a) a data processing unit (20),
(b) a cache memory (21), addressable by means of a virtual address (VA) from the processing unit, so as to access data items from the cache memory,
(c) a memory management unit (24), for translating virtual addresses from the processing unit into physical addresses (PA), and
(d) a main memory (11), addressable by the physical address from the memory management unit,
characterised in that the physical addresses from the memory management unit can also be used to address the cache memory to allow the memory management unit to access address translation information from the cache memory.
- A data processing method comprising:
(a) storing data items and address translation table entries in a main memory (11),
(b) operating a memory management unit (24) to translate a virtual address (VA) from a data processing unit (20) into a physical address (PA) for addressing the main memory, and
(c) transferring copies of the data items from the main memory into a cache memory (21), and addressing those copies in the cache memory by means of said virtual address from the data processing unit,
characterised in that copies of the address translation table entries are transferred from the main memory into the cache memory, and those copies in the cache memory are addressed by means of said physical address from the memory management unit.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB8823077 | 1988-09-30 | ||
GB888823077A GB8823077D0 (en) | 1988-09-30 | 1988-09-30 | Data processing apparatus |
Publications (3)
Publication Number | Publication Date |
---|---|
EP0365117A2 (en) | 1990-04-25 |
EP0365117A3 EP0365117A3 (en) | 1991-03-20 |
EP0365117B1 EP0365117B1 (en) | 1996-01-03 |
Family
ID=10644576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP89307983A Expired - Lifetime EP0365117B1 (en) | 1988-09-30 | 1989-08-04 | Data-processing apparatus including a cache memory |
Country Status (6)
Country | Link |
---|---|
US (1) | US5179675A (en) |
EP (1) | EP0365117B1 (en) |
AU (1) | AU612515B2 (en) |
DE (1) | DE68925336T2 (en) |
GB (1) | GB8823077D0 (en) |
ZA (1) | ZA896689B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2682507A1 (en) * | 1991-10-11 | 1993-04-16 | Intel Corp | Cache memory for a digital processor with translation of virtual addresses into real addresses. |
GB2307319A (en) * | 1995-11-17 | 1997-05-21 | Hyundai Electronics Ind | Dual-directory virtual cache |
GB2571539A (en) * | 2018-02-28 | 2019-09-04 | Imagination Tech Ltd | Memory interface |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB8728494D0 (en) * | 1987-12-05 | 1988-01-13 | Int Computers Ltd | Multi-cache data storage system |
US5724549A (en) * | 1992-04-06 | 1998-03-03 | Cyrix Corporation | Cache coherency without bus master arbitration signals |
JPH0667980A (en) * | 1992-05-12 | 1994-03-11 | Unisys Corp | Cache logic system for optimizing access to four- block cache memory and method for preventing double mistakes in access to high-speed cache memory of main frame computer |
US5581704A (en) * | 1993-12-06 | 1996-12-03 | Panasonic Technologies, Inc. | System for maintaining data coherency in cache memory by periodically broadcasting invalidation reports from server to client |
US5895499A (en) * | 1995-07-03 | 1999-04-20 | Sun Microsystems, Inc. | Cross-domain data transfer using deferred page remapping |
US6643765B1 (en) | 1995-08-16 | 2003-11-04 | Microunity Systems Engineering, Inc. | Programmable processor with group floating point operations |
US6101590A (en) | 1995-10-10 | 2000-08-08 | Micro Unity Systems Engineering, Inc. | Virtual memory system with local and global virtual address translation |
US5860025A (en) * | 1996-07-09 | 1999-01-12 | Roberts; David G. | Precharging an output peripheral for a direct memory access operation |
US6427188B1 (en) | 2000-02-09 | 2002-07-30 | Hewlett-Packard Company | Method and system for early tag accesses for lower-level caches in parallel with first-level cache |
US6647464B2 (en) * | 2000-02-18 | 2003-11-11 | Hewlett-Packard Development Company, L.P. | System and method utilizing speculative cache access for improved performance |
US6427189B1 (en) | 2000-02-21 | 2002-07-30 | Hewlett-Packard Company | Multiple issue algorithm with over subscription avoidance feature to get high bandwidth through cache pipeline |
US7085889B2 (en) * | 2002-03-22 | 2006-08-01 | Intel Corporation | Use of a context identifier in a cache memory |
US8572349B2 (en) * | 2006-01-31 | 2013-10-29 | Agere Systems Llc | Processor with programmable configuration of logical-to-physical address translation on a per-client basis |
US10514855B2 (en) * | 2012-12-19 | 2019-12-24 | Hewlett Packard Enterprise Development Lp | NVRAM path selection |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4587610A (en) * | 1984-02-10 | 1986-05-06 | Prime Computer, Inc. | Address translation systems for high speed computer memories |
EP0232526A2 (en) * | 1985-12-19 | 1987-08-19 | Bull HN Information Systems Inc. | Paged virtual cache system |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4386402A (en) * | 1980-09-25 | 1983-05-31 | Bell Telephone Laboratories, Incorporated | Computer with dual vat buffers for accessing a common memory shared by a cache and a processor interrupt stack |
JPS58102381A (en) * | 1981-12-15 | 1983-06-17 | Nec Corp | Buffer memory |
US4714990A (en) * | 1982-09-18 | 1987-12-22 | International Computers Limited | Data storage apparatus |
US4622631B1 (en) * | 1983-12-30 | 1996-04-09 | Recognition Int Inc | Data processing system having a data coherence solution |
US4991081A (en) * | 1984-10-31 | 1991-02-05 | Texas Instruments Incorporated | Cache memory addressable by both physical and virtual addresses |
US4761733A (en) * | 1985-03-11 | 1988-08-02 | Celerity Computing | Direct-execution microprogrammable microprocessor system |
GB8728494D0 (en) * | 1987-12-05 | 1988-01-13 | Int Computers Ltd | Multi-cache data storage system |
JPH07102421B2 (en) * | 1988-07-26 | 1995-11-08 | 日産自動車株式会社 | Composition for casting sand caking |
US5029070A (en) * | 1988-08-25 | 1991-07-02 | Edge Computer Corporation | Coherent cache structures and methods |
JPH0261749A (en) * | 1988-08-29 | 1990-03-01 | Mitsubishi Electric Corp | Data transfer device |
-
1988
- 1988-09-30 GB GB888823077A patent/GB8823077D0/en active Pending
-
1989
- 1989-08-04 DE DE68925336T patent/DE68925336T2/en not_active Expired - Fee Related
- 1989-08-04 EP EP89307983A patent/EP0365117B1/en not_active Expired - Lifetime
- 1989-08-29 US US07/399,969 patent/US5179675A/en not_active Expired - Lifetime
- 1989-08-31 ZA ZA896689A patent/ZA896689B/en unknown
- 1989-09-28 AU AU42304/89A patent/AU612515B2/en not_active Ceased
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4587610A (en) * | 1984-02-10 | 1986-05-06 | Prime Computer, Inc. | Address translation systems for high speed computer memories |
EP0232526A2 (en) * | 1985-12-19 | 1987-08-19 | Bull HN Information Systems Inc. | Paged virtual cache system |
Non-Patent Citations (1)
Title |
---|
COMPUTER DESIGN, vol. 26, no. 14, 1st August 1987, pages 89-94, Littleton, MA, US; W. VAN LOO: "Maximize performance by choosing best memory" * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2682507A1 (en) * | 1991-10-11 | 1993-04-16 | Intel Corp | Cache memory for a digital processor with translation of virtual addresses into real addresses. |
US5717898A (en) * | 1991-10-11 | 1998-02-10 | Intel Corporation | Cache coherency mechanism for multiprocessor computer systems |
GB2307319A (en) * | 1995-11-17 | 1997-05-21 | Hyundai Electronics Ind | Dual-directory virtual cache |
GB2307319B (en) * | 1995-11-17 | 2000-05-31 | Hyundai Electronics Ind | Dual-directory virtual cache memory and method for control thereof |
GB2571539A (en) * | 2018-02-28 | 2019-09-04 | Imagination Tech Ltd | Memory interface |
GB2571539B (en) * | 2018-02-28 | 2020-08-19 | Imagination Tech Ltd | Memory interface |
US10936509B2 (en) | 2018-02-28 | 2021-03-02 | Imagination Technologies Limited | Memory interface between physical and virtual address spaces |
US11372777B2 (en) | 2018-02-28 | 2022-06-28 | Imagination Technologies Limited | Memory interface between physical and virtual address spaces |
Also Published As
Publication number | Publication date |
---|---|
GB8823077D0 (en) | 1988-11-09 |
ZA896689B (en) | 1990-06-27 |
AU4230489A (en) | 1990-04-05 |
DE68925336T2 (en) | 1996-08-01 |
US5179675A (en) | 1993-01-12 |
EP0365117A3 (en) | 1991-03-20 |
DE68925336D1 (en) | 1996-02-15 |
EP0365117B1 (en) | 1996-01-03 |
AU612515B2 (en) | 1991-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0347040B1 (en) | Data memory system | |
US5586283A (en) | Method and apparatus for the reduction of tablewalk latencies in a translation look aside buffer | |
US5155824A (en) | System for transferring selected data words between main memory and cache with multiple data words and multiple dirty bits for each address | |
EP0009938B1 (en) | Computing systems having high-speed cache memories | |
EP0320099B1 (en) | Multi-cache data storage system | |
US3723976A (en) | Memory system with logical and real addressing | |
EP0365117B1 (en) | Data-processing apparatus including a cache memory | |
US5379394A (en) | Microprocessor with two groups of internal buses | |
EP0232526A2 (en) | Paged virtual cache system | |
US5809562A (en) | Cache array select logic allowing cache array size to differ from physical page size | |
EP0911737A1 (en) | Cache memory with reduced access time | |
KR100285533B1 (en) | Write-Through Virtual Cache Memory, Alias Addressing, and Cache Flush | |
US5530823A (en) | Hit enhancement circuit for page-table-look-aside-buffer | |
US5590310A (en) | Method and structure for data integrity in a multiple level cache system | |
JP2788836B2 (en) | Digital computer system | |
US5479629A (en) | Method and apparatus for translation request buffer and requestor table for minimizing the number of accesses to the same address | |
US5603008A (en) | Computer system having cache memories with independently validated keys in the TLB | |
US6240487B1 (en) | Integrated cache buffers | |
US7472227B2 (en) | Invalidating multiple address cache entries | |
US6574698B1 (en) | Method and system for accessing a cache memory within a data processing system | |
US5619673A (en) | Virtual access cache protection bits handling method and apparatus | |
EP0535701A1 (en) | Architecture and method for combining static cache memory and dynamic main memory on the same chip (CDRAM) | |
EP0474356A1 (en) | Cache memory and operating method | |
JPH1091521A (en) | Duplex directory virtual cache and its control method | |
EP0395835A2 (en) | Improved cache accessing method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): BE DE FR GB IT |
|
PUAL | Search report despatched |
Free format text: ORIGINAL CODE: 0009013 |
|
AK | Designated contracting states |
Kind code of ref document: A3 Designated state(s): BE DE FR GB IT |
|
17P | Request for examination filed |
Effective date: 19910227 |
|
17Q | First examination report despatched |
Effective date: 19940426 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
ITF | It: translation for a ep patent filed | ||
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): BE DE FR GB IT |
|
REF | Corresponds to: |
Ref document number: 68925336 Country of ref document: DE Date of ref document: 19960215 |
|
ET | Fr: translation filed | ||
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
26N | No opposition filed | ||
REG | Reference to a national code |
Ref country code: GB Ref legal event code: IF02 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: BE Payment date: 20020808 Year of fee payment: 14 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR Payment date: 20030715 Year of fee payment: 15 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20030717 Year of fee payment: 15 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20030831 |
|
BERE | Be: lapsed |
Owner name: *INTERNATIONAL COMPUTERS LTD Effective date: 20030831 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050301 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FR Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050429 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: ST |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20050705 Year of fee payment: 17 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20050804 |
|
GBPC | Gb: european patent ceased through non-payment of renewal fee |
Effective date: 20060804 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: GB Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20060804 |