Further cache and TLB investigation of the RAMpage memory hierarchy

dc.contributor.author Machanick, P.
dc.contributor.author Patel, Z.
dc.contributor.editor Renaud, K.
dc.contributor.editor Kotze, P.
dc.contributor.editor Barnard, A.
dc.date.accessioned 2018-08-23T10:22:05Z
dc.date.available 2018-08-23T10:22:05Z
dc.date.issued 2001
dc.identifier.citation Machanick, P. & Patel, Z. (2001) Further cache and TLB investigation of the RAMpage memory hierarchy. Hardware, Software and Peopleware: Proceedings of the Annual Conference of the South African Institute of Computer Scientists and Information Technologists, University of South Africa, Pretoria, 25-28 September 2001 en
dc.identifier.isbn 1-86888-195-4
dc.identifier.uri http://hdl.handle.net/10500/24765
dc.description.abstract The RAMpage memory hierarchy is an alternative to the traditional division between cache and main memory: main memory is moved up a level and DRAM is used as a paging device. Earlier RAMpage work has shown that the RAMpage model scales up better with the growing CPU-DRAM speed gap, especially when context switches are taken on misses. This paper investigates the effect of more aggressive first-level (L1) cache and translation lookaside buffer (TLB) implementations, with other parameters kept the same as in previous work, to illustrate that a more aggressive design improves the competitiveness of RAMpage. The more aggressive L1 shows an increase in the advantage of RAMpage with context switches on misses, supporting the hypothesis that a more aggressive L1 favours RAMpage. However, results without context switches on misses are less conclusive. A larger TLB, as predicted, makes RAMpage viable over a wider range of page sizes. en
dc.language.iso en en
dc.title Further cache and TLB investigation of the RAMpage memory hierarchy en

