Abstract
The performance of most cache memories, virtual paging systems, TLBs, and disk caches is analyzed using trace-driven simulation, which requires large amounts of storage for the traces. In this paper we present a paging-based trace compression mechanism that is lossless and improves upon the mache method of Samples [5] by up to a factor of two. The key idea is to split a trace of main memory references into two levels: the top level is the page reference stream, and the lower level is the string of offset references for each page. We then compress the two levels separately to obtain the final compaction. In addition, unlike the monolithic compression of mache, this method provides random access to the traces of individual pages.
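The two-level split described above can be sketched as follows. This is an illustrative example, not the paper's implementation; the page size and address format are assumptions:

```python
PAGE_SIZE = 4096  # assumed page size; the paper's parameters may differ

def split_trace(addresses):
    """Split a memory-reference trace into the two levels described
    in the abstract: a top-level page reference stream, and a
    per-page stream of offsets.  Each level can then be compressed
    separately, and each page's offset stream decompressed on its own
    (giving random access to individual page traces)."""
    page_stream = []           # top level: sequence of page numbers
    offset_streams = {}        # lower level: offset stream keyed by page
    for addr in addresses:
        page, offset = divmod(addr, PAGE_SIZE)
        page_stream.append(page)
        offset_streams.setdefault(page, []).append(offset)
    return page_stream, offset_streams

trace = [0x1000, 0x1004, 0x2008, 0x100C]
pages, offsets = split_trace(trace)
# pages       -> [1, 1, 2, 1]
# offsets[1]  -> [0, 4, 12]
```

Because each page's offset stream is stored and compressed independently of the others, a simulator can decompress only the pages it needs rather than the whole trace.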