Direct-Mapped and N-Way Set-Associative Cache Simulator in C/C++ for the L1 Cache in Processors
Use the make command in the Linux terminal to build the binary.
Level 1 Data Cache Simulator should accept the following command-line options:
- -s < split > or < unified >: split if -s is given, unified by default.
- -c < capacity > with < capacity > in KB: 4, 8, 16, 32, or 64.
- -b < blocksize > with < blocksize > in bytes: 4, 8, 16, 32, 64, 128, 256, or 512.
- -a < associativity > where < associativity > is the integer size of a set: 1, 2, 4, 8, or 16.
For example:
Unified: $ ./Cache -c8 -b16 -a4 < cc.trace > output.txt
Split:   $ ./Cache -c8 -b16 -a4 -s < cc.trace > output.txt
The -s option specifies a split cache, meaning that the L1 cache is split equally into L1D (data cache) and L1I (instruction cache). The -c option gives the combined size of the L1 cache, split equally between L1D and L1I. The block size and associativity are the same for both L1D and L1I. If the -s option is not given, the cache is unified by default, as before (i.e. instruction reads are also treated as data reads).
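For reference, here is a minimal sketch of how these options might be parsed with POSIX getopt; the defaults and variable names are illustrative and not necessarily the simulator's actual code.

```cpp
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char *argv[]) {
    // Illustrative defaults; the real simulator's defaults may differ.
    int capacityKB = 8, blockSize = 16, assoc = 1;
    bool split = false;

    int opt;
    while ((opt = getopt(argc, argv, "sc:b:a:")) != -1) {
        switch (opt) {
            case 's': split = true;              break;  // split L1 into L1D and L1I
            case 'c': capacityKB = atoi(optarg); break;  // combined L1 capacity in KB
            case 'b': blockSize  = atoi(optarg); break;  // block size in bytes
            case 'a': assoc      = atoi(optarg); break;  // set associativity (ways)
            default:
                fprintf(stderr, "usage: %s [-s] -c<KB> -b<bytes> -a<ways>\n", argv[0]);
                return 1;
        }
    }
    // ... construct the cache(s) and process the trace from stdin ...
    return 0;
}
```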
The following functionality is added to handle data write hits and misses, selected with optional command-line options:
- --wbwa : Write Back / Write Allocate [Default]
- --wbwn : Write Back / Write No-Allocate
- --wtwa : Write Through / Write Allocate
- --wtwn : Write Through / Write No-Allocate
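For illustration, a minimal sketch of how a data store might be handled under the four policies; CacheLine, findLine, allocateLine, and the memory counters are simplified stand-ins, not the simulator's actual structures.

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical minimal line and counter definitions, used only for this sketch.
struct CacheLine { bool dirty = false; };
static std::unordered_map<uint32_t, CacheLine> lines;  // stand-in for the real set/way structure
static long memoryReads = 0, memoryWrites = 0;

enum WritePolicy { WBWA, WBWN, WTWA, WTWN };

// Stand-in lookup/allocation keyed by block address; the real simulator
// indexes a set and chooses a way with LRU.
static CacheLine* findLine(uint32_t blockAddr) {
    auto it = lines.find(blockAddr);
    return it == lines.end() ? nullptr : &it->second;
}
static CacheLine* allocateLine(uint32_t blockAddr) { return &lines[blockAddr]; }

void handleStore(uint32_t blockAddr, WritePolicy policy) {
    CacheLine *line = findLine(blockAddr);
    if (line) {                                     // write hit
        if (policy == WBWA || policy == WBWN)
            line->dirty = true;                     // write back: defer the memory write
        else
            memoryWrites++;                         // write through: update memory immediately
    } else {                                        // write miss
        if (policy == WBWA || policy == WTWA) {     // write allocate: fetch the block first
            line = allocateLine(blockAddr);
            memoryReads++;
            if (policy == WBWA) line->dirty = true;
            else                memoryWrites++;     // write through still updates memory
        } else {
            memoryWrites++;                         // no-allocate: write around the cache
        }
    }
}
```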
- The input to the cache simulator is a sequence of memory access traces, one per line, terminated by end of file, in the following format: a leading 0 for a data load, 1 for a data store, and 2 for an instruction load.
0 <address>
1 <address> <dataword>
2 <address>
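For reference, a minimal sketch of the trace-reading loop; it assumes the addresses and data word are hexadecimal, which is an assumption about the trace format rather than something stated above.

```cpp
#include <cstdio>

int main() {
    // Assumes hexadecimal addresses/data words; change %x to %u if the traces are decimal.
    char line[128];
    int type;
    unsigned int address, dataword;
    while (fgets(line, sizeof line, stdin)) {
        int fields = sscanf(line, "%d %x %x", &type, &address, &dataword);
        if (fields < 2) continue;                            // skip blank or malformed lines
        switch (type) {
            case 0: /* data load at address */               break;
            case 1: /* data store of dataword at address */  break;
            case 2: /* instruction load at address */        break;
        }
    }
    return 0;
}
```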
The program was developed so that, in the code, there is no difference between a direct-mapped and an N-way set-associative cache. Memory is treated as unstructured and linear. The cache, however, is structured: the cache class is set up according to the associativity, with an array of 2^(index bits) sets. Each cache set implements the replacement policy internally and holds <associativity> cache-line objects; each cache line contains the basic fields such as the tag, the data, and bits like valid and dirty. The data is further divided into block-size (in words) words, where one word equals 4 bytes. The replacement policy implemented is LRU, realised within each cache set as a K-matrix LRU algorithm. The figure below shows a bird's-eye view of the cache structure.
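The following is a minimal sketch of one cache set using the matrix (K-matrix) LRU scheme described above; the class and member names (CacheSet, touch, victim) are illustrative, not the simulator's actual identifiers.

```cpp
#include <vector>
#include <cstdint>

// One cache line: basic bits, tag, and block-size words of data (1 word = 4 bytes).
struct CacheLine {
    bool valid = false, dirty = false;
    uint32_t tag = 0;
    std::vector<uint32_t> data;
};

// One cache set with matrix LRU: on an access to way i, row i is set to all 1s
// and column i is cleared; the least-recently-used way is the all-zero row.
class CacheSet {
public:
    CacheSet(int ways, int wordsPerBlock)
        : lines(ways), lru(ways, std::vector<bool>(ways, false)) {
        for (auto &l : lines) l.data.assign(wordsPerBlock, 0);
    }

    // Record that way i was just used.
    void touch(int i) {
        for (size_t j = 0; j < lru.size(); ++j) lru[i][j] = true;   // set row i
        for (size_t j = 0; j < lru.size(); ++j) lru[j][i] = false;  // clear column i
    }

    // Way to evict: prefer an invalid line, otherwise the all-zero (LRU) row.
    int victim() const {
        for (size_t i = 0; i < lines.size(); ++i)
            if (!lines[i].valid) return (int)i;
        for (size_t i = 0; i < lru.size(); ++i) {
            bool allZero = true;
            for (size_t j = 0; j < lru.size(); ++j)
                if (lru[i][j]) { allZero = false; break; }
            if (allZero) return (int)i;
        }
        return 0;  // not reached once the matrix is consistent
    }

    std::vector<CacheLine> lines;        // <associativity> lines per set
private:
    std::vector<std::vector<bool>> lru;  // ways x ways LRU matrix
};
```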
GNU GENERAL PUBLIC LICENSE