2011-03-30 21:51:06
In some recent discussions, I sensed there is some confusion around solid state device (SSD) storage used as a storage tier vs. a cache. While there are similarities, and both are intended to achieve the same end result, i.e. acceleration of data accesses from slower storage, there are some definite differences which I thought I'd try to clarify here. This is my working viewpoint, so please do post comments if you see it differently.
Firstly, SSD caching is temporary storage of data in an SSD cache, whereas true data tiering is a semi-permanent movement of data to or from an SSD storage tier. Both are based on algorithms or policies that ultimately result in data being copied to, or removed from, SSDs. To clarify further: if you were to unplug or remove your SSDs, in the caching case the user data is still stored in the primary storage behind the SSD cache and is still accessible from the original source (albeit more slowly), whereas in a tiered environment the user data (and its capacity) is no longer available if the SSD tier is removed, as the data was physically moved to the SSDs and most likely removed from the original storage tier.
Another subtle difference between caching and tiering is whether the SSD capacity is visible or not. In cached mode, the SSD capacity is totally invisible: the end application simply sees data accessed much faster if it has been accessed before and is still in the cache store (i.e. a cache hit). So if a 100GB SSD cache exists in a system with, say, 4TB of hard disk drive (HDD) storage, the total capacity is still only 4TB, i.e. that of the hard disk array; 100% of the data always exists on the 4TB, with only copies of it held in the SSD cache according to the caching algorithm used. In a true data tiering setup using SSDs, the total storage is 4.1TB, and though this may be presented to a host computer as one large virtual storage device, part of the data physically resides on the SSD and the remainder on the hard disk storage. Typically such a small amount of SSD would not be implemented as a dedicated tier, but you get the idea if, say, 1TB of SSD storage were used in a storage area network with 400TB of hard drive based storage, creating 401TB of usable capacity.
So how does data make it into a cache versus a tier? Cache controllers and block level automated data tiering controllers both monitor and operate on statistics gathered from the stream of storage commands, in particular the addresses of the storage blocks being accessed.
SSD Caching Simplified
Caching models typically employ a lookup table method based on the block level address (or range of blocks) to establish whether the data the host is requesting has been accessed before and potentially exists in the SSD cache. Data typically moves into an SSD cache more quickly than with tiering, where more analysis of the longer term trend is employed, which can span hours if not days in some cases. Unlike DRAM based caches, however, where it is possible to cache all reads, a little more care and time is taken with SSDs to ensure that excessive writing to the cache is avoided, given the finite number of writes an SSD can tolerate. Most engines use some form of "hot-spot" detection algorithm to identify frequently accessed regions of storage, and move data into the cache area only once a definite frequent-access trend has been established.
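To make the hot-spot idea concrete, here is a minimal sketch in Python of frequency-based hot-spot detection over block addresses. The region size, threshold, and function names are my own illustrative assumptions, not any particular vendor's implementation:

```python
from collections import defaultdict

REGION_SIZE = 2048          # blocks per region; an assumed granularity
HOT_THRESHOLD = 50          # accesses before a region counts as "hot"

access_counts = defaultdict(int)   # region -> access count
hot_regions = set()                # regions with a proven access trend

def record_read(block_address):
    """Update access statistics for the region containing this block."""
    region = block_address // REGION_SIZE
    access_counts[region] += 1
    # Flag the region as hot only after a clear trend, which limits
    # unnecessary writes to the flash media.
    if access_counts[region] >= HOT_THRESHOLD:
        hot_regions.add(region)

def is_hot(block_address):
    """True if this block falls in a frequently accessed region."""
    return (block_address // REGION_SIZE) in hot_regions
```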
Traditional caching involves one of several classic caching algorithms, which result in either read-only or read-and-write caching. Cache algorithms and approaches vary by vendor and dictate how a read from the HDD storage results in a copy of the original data entering the cache table, and how long it "lives" in the cache itself. Subsequent reads to that same data, whose original location was on the hard drive, can now be served from the SSD cache instead of the slower HDD, i.e. a cache hit (determined using an address lookup in the cache tables). If this is the first time data is being accessed from a specific location on the hard drive(s), then the data must first be read from the slower drives, and a copy is made in the SSD cache if the hot-spot checking algorithm deems it necessary (triggered by the cache miss).
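The resulting read path might look something like the sketch below, reusing record_read and is_hot from above. The dictionaries stand in for the cache lookup table and the SSD itself, and read_from_hdd is a stub for the slow backend read; none of this is a real controller's API:

```python
ssd_store = {}      # simulated SSD contents: block address -> data
cache_table = {}    # cache lookup table: HDD block address -> SSD location

def read_from_hdd(block_address):
    return f"data@{block_address}"        # stub for a slow HDD read

def cached_read(block_address):
    if block_address in cache_table:                  # cache hit
        return ssd_store[cache_table[block_address]]  # serve from fast SSD
    data = read_from_hdd(block_address)               # cache miss: slow path
    record_read(block_address)                        # feed hot-spot statistics
    if is_hot(block_address):                         # copy in proven-hot data only
        ssd_store[block_address] = data
        cache_table[block_address] = block_address
    return data
```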
Caching algorithms often use more sophisticated models to pre-fetch data based on a trend, storing it in the cache when there is a high probability it will be accessed soon, e.g. in sequential video streaming or VMware virtual machine migrations, where it is beneficial to cache data from the next sequential addresses and pull it into the cache at the same time as the initial access. After some period of time, or when new data needs to displace older or stale data in the cache, a cache flush cleans out the old data. This may also be triggered by the hot-spot detection logic determining that the data is now "cold".
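A crude version of sequential pre-fetch could be layered on top of the same read path. The depth and the single-stream trend check here are assumptions purely for illustration:

```python
PREFETCH_DEPTH = 4      # how many blocks ahead to stage; an assumed tunable
last_address = None

def read_with_prefetch(block_address):
    """Serve a read, staging the next blocks if a sequential trend is seen."""
    global last_address
    data = cached_read(block_address)
    if last_address is not None and block_address == last_address + 1:
        # Two consecutive addresses suggest a sequential stream, so pull
        # the following blocks into the cache alongside this access.
        for offset in range(1, PREFETCH_DEPTH + 1):
            addr = block_address + offset
            ssd_store[addr] = read_from_hdd(addr)
            cache_table[addr] = addr
    last_address = block_address
    return data
```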
The measure of a good cache is how many hits it gets versus misses. If data accesses are very random, scattered over the entire addressable range of storage with infrequent returns to the same locations, then the cache's effectiveness is significantly lower, and sometimes detrimental to overall performance, as there is an overhead in attempting to locate data in the cache on every access.
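The arithmetic is simple: the hit ratio is hits divided by total accesses. The numbers below are made up just to show the point:

```python
def hit_ratio(hits, misses):
    """Fraction of accesses served from the cache."""
    return hits / (hits + misses)

print(hit_ratio(8000, 2000))   # 0.8: a workload with strong locality
print(hit_ratio(500, 9500))    # 0.05: random I/O, where per-access lookup
                               # overhead may outweigh the cache's benefit
```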
SSD Auto Tiering Basics
An automated data tiering controller treats the SSDs and HDDs as two separate physical islands of storage, even if they are presented to the host application (and hence the user) as one large contiguous storage pool (a virtual disk). A statistics gathering or scanning engine collects data over time and looks for access patterns and trends that match a pre-defined set of policies or conditions. These engines use a mix of algorithms and rules that dictate how and when a particular block (or group of blocks) of storage is to be migrated or moved.
The simplest, "caching-like" approach used by a data tiering controller is based on frequency of access. For example, it may monitor data blocks being accessed from the hard drives, and if the access count passes a pre-defined number of accesses per hour "N" for a period of time "T", a rule may be employed that says: when N>1000 AND T>60 minutes, move the data up to the next logical tier. So if data is being accessed heavily from the hard drives and there are only two tiers defined, SSD being the faster of the two, the data will be copied to the SSD tier (i.e. promoted), and the virtual address map that converts real-time host addresses to physical locations is updated to point to the new location in SSD storage. All of this, of course, happens behind a virtual interface, so the host has no idea the storage just moved to a new physical location. Depending on the tiering algorithm and vendor, the data may be discarded on the old tier to free up capacity. The converse is also true: if data living on the SSD tier is infrequently accessed, it may be demoted to the HDD tier based on similar algorithms.
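A toy version of such a promotion/demotion engine might look like this. The thresholds mirror the N>1000, T>60-minute example above; the window handling, tier names, and data structures are all my own assumptions for illustration:

```python
import time

N_ACCESSES = 1000           # promote when a group exceeds this many accesses...
T_WINDOW = 60 * 60          # ...within a 60-minute window (in seconds)

stats = {}                  # block group -> (window start time, access count)
tier_of = {}                # block group -> "ssd" or "hdd"

def on_access(group):
    """Count an access, restarting the count when the window expires."""
    start, count = stats.get(group, (time.time(), 0))
    if time.time() - start > T_WINDOW:
        start, count = time.time(), 0
    stats[group] = (start, count + 1)

def rebalance():
    """Periodic scan: promote hot groups to SSD, demote cold ones to HDD."""
    now = time.time()
    for group, (start, count) in stats.items():
        if count > N_ACCESSES:
            tier_of[group] = "ssd"    # promote; the virtual map would now
                                      # point this group at the SSD tier
        elif now - start > T_WINDOW:
            tier_of[group] = "hdd"    # demote data that has gone cold
```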
More sophisticated tiering models exist of course, some working at the file layer and looking at the specific data or file metadata to make more intelligent decisions about what to do with it.
Where is SSD Caching or Tiering Applied?
Typically, SSD caching is implemented as a single SATA or PCIe flash storage device along with operating system driver layer software in a direct attached storage (DAS) environment, to speed up Windows or other operating system accesses. In much larger data center storage area networks (SANs) and cloud server-storage environments, there are an increasing number of dedicated rackmount SSD storage units that can act as a transparent cache at the LUN level, where the caching is all done in the storage area network layer, again invisible to the host computer. The benefit of cache based systems is that they can be added transparently and often non-disruptively (other than the initial install). Unlike with tiering, there is no need to set up dedicated pools or tiers of storage, i.e. they can be overlaid on top of an existing storage setup.
Tiering is more often found in larger storage area network based environments, with several disk array and storage appliance vendors offering the capability to tier between different disk arrays based on their media type or configuration. Larger tiered systems often also use other backup storage media such as tape or virtual tape systems. Automated tiering can substantially reduce the management overhead associated with backup and archival of large amounts of data by fully automating the movement process, or help meet the data accessibility requirements of government regulations. In many cases it is possible to tier data transparently between different media types within the same physical disk array, e.g. a few SSD drives in RAID 1 or 10, 4-6 SAS drives in RAID 10, and 6-12 SATA drives in a RAID group, i.e. three distinct tiers of storage. Distributed or virtualized storage environments also offer either manual or automated tiering mechanisms that work within their proprietary environments. At the other end of the spectrum, file volume manager and storage virtualization solutions running on the host or in a dedicated appliance can allow IT managers to organize existing disk array devices of different types and vendors and sort them into tiers. This is typically a process that requires a reasonable amount of planning and often disruption, but it can yield tremendous benefits once deployed.