Cache memory organization

A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. To be cost-effective and to enable efficient use of data, caches must be relatively small. Nevertheless, caches have proven effective in many areas of computing because access patterns in typical computer applications exhibit locality of reference.

Access patterns exhibit temporal locality when data that has recently been requested is requested again, while spatial locality refers to requests for data stored physically close to data that has already been requested. Locality matters because a larger, slower resource incurs significant access latency; for example, it can take hundreds of clock cycles for a 4 GHz processor to reach DRAM. This is mitigated by reading in large chunks, in the hope that subsequent reads will be from nearby locations.
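To make the two patterns concrete, here is a minimal Python sketch (purely illustrative, not a benchmark): the first loop re-reads the same element, which is temporal locality, while the second walks adjacent elements in order, which is spatial locality, the pattern that reading in large chunks rewards.

```python
data = list(range(1024))
total = 0

# Temporal locality: the same item is accessed repeatedly in a short
# window, so after the first access a cache would likely still hold it.
hot_index = 0
for _ in range(1000):
    total += data[hot_index]

# Spatial locality: consecutive items are accessed in order, so a cache
# that fetched a large chunk around data[0] already holds the neighbors.
for i in range(len(data)):
    total += data[i]
```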

The use of a cache also allows for higher throughput from the underlying resource, by assembling multiple fine-grained transfers into larger, more efficient requests. In the case of DRAM, this might be served by a wider data bus. Reading larger chunks also reduces the fraction of bandwidth spent transmitting address information.
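As a rough sketch of this batching idea (all names here are hypothetical), the class below serves single-byte reads from a file by fetching a whole 4 KiB block at a time, so many fine-grained requests collapse into one larger transfer:

```python
BLOCK_SIZE = 4096  # fetch granularity: one large read serves many small ones

class BlockReader:
    """Serves byte-sized reads by caching one block of a backing file."""

    def __init__(self, backing_file):
        self.backing_file = backing_file
        self.block_start = -1  # offset of the cached block; -1 means empty
        self.block = b""

    def read_byte(self, offset):
        start = (offset // BLOCK_SIZE) * BLOCK_SIZE
        if start != self.block_start:       # miss: do one large transfer
            self.backing_file.seek(start)
            self.block = self.backing_file.read(BLOCK_SIZE)
            self.block_start = start
        return self.block[offset - start]   # hit: serve from the cached block
```

Sequential read_byte calls within the same block trigger only one underlying read, and only one address (the block start) is transmitted for the whole chunk.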

Hardware implements cache as a block of memory for temporary storage of data likely to be used again. A cache is made up of a pool of entries. Each entry has associated data, which is a copy of the same data in some backing store.

Each entry also has a tag, which specifies the identity of the data in the backing store of which the entry is a copy. When a client needs data, it first checks the cache: if an entry can be found with a tag matching that of the desired data, the data in the entry is used instead of the copy in the backing store. This situation is known as a cache hit.
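A minimal sketch of this structure in Python, assuming a caller-supplied function that reads from the backing store on a miss (it omits the eviction policy that a real, size-bounded cache would need):

```python
class Cache:
    """A pool of entries, each pairing a tag with a copy of backing-store data."""

    def __init__(self, fetch_from_backing_store):
        self.entries = {}                       # tag -> copy of the data
        self.fetch = fetch_from_backing_store   # called only on a miss

    def get(self, tag):
        if tag in self.entries:       # cache hit: an entry's tag matches
            return self.entries[tag]
        data = self.fetch(tag)        # cache miss: go to the backing store
        self.entries[tag] = data      # keep a copy for future requests
        return data

# Usage with a hypothetical slow lookup standing in for the backing store:
cache = Cache(lambda key: f"value-for-{key}")
cache.get("a")  # miss: fetched from the backing store
cache.get("a")  # hit: served from the cache entry
```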