Category: Servers and Storage

2013-05-17 14:20:48

CPU cache is a commonly used method for improving processor performance by reducing the access time to memory. Typically a CPU has both an instruction cache, to prefetch processor instructions, and a data cache, to speed up access to requested data. Here, when I talk about CPU cache, I am focusing on the data cache.
Storage cache is the term used here for cache memory designed to speed up access to data stored in primary (disk) storage. This type of cache is not part of the CPU or motherboard design; rather, it resides between the CPU and the primary storage system.
In this blog, I will discuss the similarities and differences between CPU cache and storage cache.

Locality Principle

Both CPU cache and storage cache work based on the principle of locality. Simply stated, this means that the data locations used by an application are often grouped in clusters (e.g., for 90% of the time, a program uses 10% of its data set). This leads to the concept of caching: using special high-performance memory to store the most frequently and recently used data. A high percentage of accesses to cached data (a high "cache hit" rate) means that the caching algorithm is effective at selecting the right data to keep in cache. However, when an application requires data to be written to primary storage, the data locality is not as strong, and as a result storage cache typically has a lower hit rate than CPU cache.
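To make the locality principle concrete, here is a minimal Python sketch (the cache size and the 90/10 workload skew are hypothetical, chosen only to mirror the rule of thumb above) that runs a skewed access pattern against a small LRU cache and reports the hit rate:

import random
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()          # key -> data, oldest first

    def access(self, key):
        """Return True on a cache hit, False on a miss."""
        if key in self.entries:
            self.entries.move_to_end(key)     # mark as most recently used
            return True
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[key] = None              # simulate fetch from backing store
        return False

# Hypothetical workload: 90% of accesses hit 10% of a 1000-block data set.
hot, cold = range(0, 100), range(100, 1000)
cache, hits, N = LRUCache(capacity=100), 0, 100_000
for _ in range(N):
    block = random.choice(hot) if random.random() < 0.9 else random.choice(cold)
    hits += cache.access(block)
print(f"hit rate: {hits / N:.1%}")

The measured hit rate is high because most accesses fall in the small hot set, though the cold traffic evicting hot blocks keeps it somewhat below the theoretical 90%.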

Cache Policies

A cache placement policy decides where in the cache hot data will go and how it will be associated with the main-memory copy of that data. For CPU cache there are many placement strategies, including direct mapped, set-associative, and fully associative. The performance of these strategies is directly related to their complexity (i.e., higher performance requires higher complexity). Another design issue for any cache is deciding what data to evict when the cache is full; again, this is a complex problem with many different solutions. Finally, a cache write policy determines what to do on a write operation: a write-through policy updates both the cache and the backing store to guarantee data coherence, while a write-back policy updates only the cache and delays the write to the backing store until a later time, for better cache performance.
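The arithmetic behind these placement strategies is simple; the sketch below shows how one address maps to a cache location under each scheme (the block size, line count, and associativity are illustrative, not taken from any particular CPU):

BLOCK_SIZE = 64                  # bytes per cache line
NUM_LINES  = 512                 # total lines in the cache
WAYS       = 4                   # associativity for the set-associative case
NUM_SETS   = NUM_LINES // WAYS

def placement(addr):
    block = addr // BLOCK_SIZE              # which memory block holds addr
    direct_slot = block % NUM_LINES         # direct mapped: one fixed line
    set_index   = block % NUM_SETS          # set-associative: one fixed set,
    tag         = block // NUM_SETS         #   any of WAYS lines within it
    return direct_slot, set_index, tag      # fully associative: any line at all

The hardware cost grows with associativity because more tags must be compared in parallel on every lookup, which is exactly the performance/complexity trade-off noted above.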

For storage cache, the issues are similar but their impact is different. Cache placement is not as critical: the underlying hardware is slower, so the CPU has time to compute the best location for the data. Fully associative mapping, which provides the highest cache hit rates at the cost of complexity, can be used in most cases. The two most important design issues with storage cache are (a sketch of both follows the list):
1) cache replacement algorithm – deciding what data to cache and what data to evict when the cache is full;
2) cache write policy -- write-through or write-back.
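Here is a small Python sketch of both decisions for a storage cache: LRU replacement plus a switchable write-through/write-back policy. The backing_store dict stands in for primary (disk) storage, and the class is illustrative rather than a production design:

from collections import OrderedDict

class StorageCache:
    def __init__(self, capacity, backing_store, write_back=False):
        self.capacity = capacity
        self.backing = backing_store          # stands in for disk
        self.write_back = write_back
        self.lines = OrderedDict()            # block -> (data, dirty), LRU order

    def _evict(self):
        block, (data, dirty) = self.lines.popitem(last=False)
        if dirty:                             # write-back: flush on eviction only
            self.backing[block] = data

    def read(self, block):
        if block in self.lines:               # cache hit
            self.lines.move_to_end(block)
            return self.lines[block][0]
        if len(self.lines) >= self.capacity:  # cache miss: make room, go to disk
            self._evict()
        data = self.backing.get(block)
        self.lines[block] = (data, False)
        return data

    def write(self, block, data):
        if block not in self.lines and len(self.lines) >= self.capacity:
            self._evict()
        if self.write_back:
            self.lines[block] = (data, True)  # delay the disk write (dirty)
        else:
            self.lines[block] = (data, False)
            self.backing[block] = data        # write-through: update both copies
        self.lines.move_to_end(block)

With write_back=True, a power loss can strand dirty data in the cache, which is why write-back designs pair the policy with batteries, capacitors, or persistent media (a point the next section returns to).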

Using SSDs As Storage Cache

Recently, SSDs have become more cost-effective and are being used in conjunction with SSD caching software to operate as storage cache. In this case, the differences between SSD-based storage cache and CPU cache are much more critical to understand. Applications assume that data written to primary storage will be there forever; since storage cache is used as an intermediary for primary storage, SSD-based storage cache must have the same persistence properties as primary storage.

CPU cache is usually made from SRAM, while SSD storage cache is made using NAND flash. SRAM and NAND flash have completely different properties when it comes to read/write operations and physical wear. There are no read-versus-write concerns with SRAM, since its performance is nearly identical for read and write operations, and there is no concern about SRAM wearing out due to excessive writes. Therefore, a CPU caching algorithm can be developed without any concern for asymmetric read/write performance or for the volume of write commands.

In contrast, NAND-flash-based SSD server cache is asymmetric: write operations can take much longer than read operations. Further, due to the block structure of flash devices, a single write command can trigger multiple write operations within the SSD as data is moved internally to clear space for block erase operations. This phenomenon, called write amplification, is a major concern for the SSD caching algorithm. Finally, the NAND devices within an SSD server cache have a limited number of erase/write cycles, so the SSD caching algorithm must minimize write operations to the SSD to ensure longevity and reliability.
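A back-of-the-envelope illustration of why this matters (all numbers below are made up for the example, not taken from any specific device):

host_writes_gb  = 100                   # data the caching layer sends to the SSD
flash_writes_gb = 250                   # data the SSD actually writes internally,
                                        #   inflated by garbage collection moves
wa_factor = flash_writes_gb / host_writes_gb        # write amplification = 2.5x

pe_cycles   = 3000                      # rated program/erase cycles per block
capacity_gb = 400
lifetime_host_writes_gb = capacity_gb * pe_cycles / wa_factor
print(f"WA = {wa_factor:.1f}, endurance ~ {lifetime_host_writes_gb / 1e6:.2f} PB of host writes")

Halving the write amplification doubles the usable endurance, which is why a caching algorithm's write behavior matters as much as its hit rate.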

Caching Algorithms Must Be Device Specific

While there are many different ways to implement storage cache, SSDs are emerging as one of the most cost-effective solutions. Due to the unique physical properties of NAND flash, an SSD-based server cache must be managed to both maximize cache hits and minimize erase/write cycles (that is, maximize device lifetime). These seemingly opposed requirements are what make successful SSD caching algorithms so challenging to develop. It is imperative to match the caching algorithm to the properties of the device used for the server cache.
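One common way to reconcile the two goals is an admission policy: do not spend an SSD write on a block until it has proven itself hot. The sketch below is a hypothetical design (the two-miss threshold is chosen arbitrarily) that admits a block only after its second miss, so one-shot reads never consume a flash write:

from collections import Counter, OrderedDict

class AdmissionFilteredCache:
    def __init__(self, capacity, threshold=2):
        self.capacity = capacity
        self.threshold = threshold
        self.lines = OrderedDict()        # blocks resident on the SSD (LRU order)
        self.miss_counts = Counter()      # tracked in RAM, costs no flash writes
                                          # (a real design would bound or age this)

    def access(self, block):
        if block in self.lines:
            self.lines.move_to_end(block)
            return "hit"
        self.miss_counts[block] += 1
        if self.miss_counts[block] >= self.threshold:   # block has earned admission
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)          # evict the LRU block
            self.lines[block] = None                    # exactly one SSD write
            del self.miss_counts[block]
        return "miss"

The filter trades a slightly lower hit rate on first access for far fewer erase/write cycles, which is exactly the balance described above.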
