Why using cache memory? What is the typical size of a cache memory page? Order from slowest to fastest: L1, L2, L3 cache. Order from smallest to biggest: L1, L2, L3 cache. Does the core stall if producing a read-miss? Does the core stall if producing a write-miss? Why is a write buffer necessary? Given a RAM with 64 (cache page size) bytes *32 memory pages and a cache with 8 memory pages. Explain the addressing with direct mapped cache. What are tags (direct mapped cache) and why do we need them?
Expert Answer
1) A cache memory provides better performance. Whenever a program runs, it must fetch instructions and data, and every fetch from main memory travels between the CPU and DRAM through the memory controller, so performance depends on how fast that round trip is. A cache keeps recently used data close to the CPU, so the next access to the same data is served much faster than going to main memory again.
2) It depends on the design, but a cache page (cache line) is small: typically 32, 64, or 128 bytes, with 64 bytes being the most common today (as the question itself assumes). Note that the sizes 256 KB or 512 KB describe whole caches, not individual cache pages.
3) The order from slowest to fastest is L3->L2->L1 (L3 being slowest).
4) The order from smallest to biggest is L1->L2->L3 (L1 being smallest).
5) Yes, the core stalls on a cache read miss: it cannot proceed without the requested instruction or data, so it must wait until the miss is serviced.
6) No, the core usually does not stall on a cache write miss: the write can be placed in a queue (write buffer) to complete in the background, and the core moves on to the next instruction.
7) A write buffer holds data that is being written to main memory or to the next cache level. This frees the cache to service read requests immediately instead of blocking on the write. It is especially useful when main memory is very slow, because subsequent reads can proceed without waiting out the long write latency.
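To make the write-buffer behavior in 6) and 7) concrete, here is a minimal toy sketch (the class name, capacity, and dict-as-memory are illustrative choices, not a real hardware model): the core enqueues a store and keeps running, and only stalls when the buffer is full.

```python
from collections import deque

class WriteBuffer:
    """Toy FIFO write buffer: the CPU enqueues stores and keeps running;
    a (simulated) memory controller drains entries later."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = deque()

    def cpu_write(self, addr, value):
        """Returns True if the core can continue without stalling."""
        if len(self.entries) == self.capacity:
            return False          # buffer full -> the core must stall
        self.entries.append((addr, value))
        return True               # write buffered, core proceeds

    def drain_one(self, memory):
        """Memory controller retires the oldest buffered write."""
        if self.entries:
            addr, value = self.entries.popleft()
            memory[addr] = value

memory = {}
wb = WriteBuffer(capacity=2)
print(wb.cpu_write(0x40, 7))   # True  -> no stall
print(wb.cpu_write(0x80, 9))   # True  -> no stall
print(wb.cpu_write(0xC0, 1))   # False -> buffer full, core stalls
wb.drain_one(memory)           # memory controller retires one entry
```

This also shows why the read-miss case in 5) is different: a read has no such buffer to hide behind, because the core needs the value before it can continue.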
8) The RAM has 32 pages of 64 bytes each, i.e. 2048 bytes, so a physical address is 11 bits. In a direct mapped cache the address is divided into tag, index, and offset: the offset is 6 bits (selects one of the 64 bytes within a page), the index is 3 bits (selects one of the 8 cache pages), and the tag is the remaining 11 - 3 - 6 = 2 bits. The index determines the single cache page where an address may be placed.
9) Since 32 memory pages map onto 8 cache pages, 4 different memory pages compete for each cache page. The tag, stored alongside each cache page, records which of those candidates is currently cached; on every access the stored tag is compared with the tag bits of the address to decide hit or miss. Without tags the cache could not tell which of the 4 possible memory pages a cached page actually came from.
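The address split for exactly this geometry (64-byte pages, 8 cache pages, 32 memory pages) can be sketched as follows; the function name `split` is just an illustrative label:

```python
# Geometry from the question: 64-byte cache pages (lines),
# 32 pages of main memory, 8 pages of cache.
LINE_SIZE   = 64    # bytes per cache page -> 6 offset bits
CACHE_LINES = 8     # pages in the cache   -> 3 index bits
RAM_PAGES   = 32    # 32 * 64 = 2048 B RAM -> 11-bit address

OFFSET_BITS = LINE_SIZE.bit_length() - 1                # 6
INDEX_BITS  = CACHE_LINES.bit_length() - 1              # 3
ADDR_BITS   = (RAM_PAGES * LINE_SIZE).bit_length() - 1  # 11
TAG_BITS    = ADDR_BITS - INDEX_BITS - OFFSET_BITS      # 2

def split(addr):
    """Decompose an 11-bit physical address into (tag, index, offset)."""
    offset = addr & (LINE_SIZE - 1)
    index  = (addr >> OFFSET_BITS) & (CACHE_LINES - 1)
    tag    = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset

# Example: address 0x4B0 = 0b10_010_110000
print(split(0x4B0))   # (2, 2, 48): tag 2, cache page 2, byte 48
```

Any address with index 2 lands in the same cache page; only the 2-bit tag tells the four competing memory pages apart.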