An efficient cache memory compression technique to improve system memory performance

Abstract

Microprocessor technology advances rapidly, as described by Moore's law, which states that the number of transistors on a chip doubles roughly every two years. As technology scales, the memory attached to the processor must store increasingly large amounts of data. If the memory is off-chip, access speed drops and latency increases; if it is on-chip, access is much faster and delay is lower. On-chip cache memory must therefore be designed to accommodate large amounts of data without increasing its area. Cache memory compression and decompression can be employed in high-performance microprocessors to hold more data without degrading performance, increasing size, or consuming more power. Speed is the central challenge for any electronic component, and memory access time depends on the speed of the microprocessor: an off-chip memory access takes an order of magnitude longer than an on-chip cache access, and two orders of magnitude longer than executing an instruction. Cache memory compression is attractive to microprocessor system designers because it increases effective cache capacity and off-chip bandwidth. However, past work on cache compression has often assumed away the costs in processor performance, power consumption, and area overhead. Without understanding these costs, it is not possible to determine whether compression at the levels of the memory hierarchy closest to the processor is beneficial; compression ratio alone is not the important metric.
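To make the trade-off concrete, the following is a minimal illustrative sketch (not the scheme from this work) of a toy cache-line compressor based on zero-run encoding, one of the simplest patterns real cache-compression hardware exploits. All names and parameters here are hypothetical; it shows why compression ratio alone is insufficient, since the decompression loop adds latency on every hit to a compressed line.

```python
def compress_line(line: bytes) -> bytes:
    """Encode each run of zero bytes as the pair (0x00, run_length);
    all other bytes are copied through unchanged."""
    out = bytearray()
    i = 0
    while i < len(line):
        if line[i] == 0:
            run = 1
            # extend the zero run, capped at 255 so the length fits one byte
            while i + run < len(line) and line[i + run] == 0 and run < 255:
                run += 1
            out += bytes([0, run])
            i += run
        else:
            out.append(line[i])
            i += 1
    return bytes(out)

def decompress_line(blob: bytes) -> bytes:
    """Invert compress_line: expand (0x00, n) pairs back into n zero bytes."""
    out = bytearray()
    i = 0
    while i < len(blob):
        if blob[i] == 0:
            out += bytes(blob[i + 1])  # bytes(n) yields n zero bytes
            i += 2
        else:
            out.append(blob[i])
            i += 1
    return bytes(out)

# Hypothetical 64-byte cache line that is mostly zeros
line = bytes([7, 0, 0, 0, 0, 3]) + bytes(58)
packed = compress_line(line)
ratio = len(line) / len(packed)  # effective capacity gain for this line
```

A line dominated by zeros compresses well (here 64 bytes shrink to 6, a ratio above 10), but a line of random bytes would not compress at all while still paying the decompression cost, which is why area, power, and latency overheads must be weighed alongside the ratio.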

