Category: Servers & Storage

2013-12-26 12:57:41

Many All-Flash Storage Array vendors are counting on deduplication to bring the cost per GB of their systems more in line with traditional hard disk-based storage systems. However, IT planners need to be careful not to assume that all deduplication technologies are the same. They are not, and the differences have a significant impact on storage performance, efficiency, effective cost and data safety.



How Dedupe Works


Deduplication counts on redundant data segments to deliver its efficiencies. As data is written to the storage system it is segmented, and those segments are examined for uniqueness. If a data segment is unique, it is written to storage and the deduplication engine's table is updated with a unique hash key associated with that segment. If the segment is not unique, the data is not written to disk; instead, a pointer is created that links to the original data segment.
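
As a rough illustration of that write path, here is a minimal sketch in Python. The fixed 4 KB segment size, SHA-256 hashing and the class/field names are illustrative assumptions, not any vendor's implementation:

```python
import hashlib

SEGMENT_SIZE = 4096  # assumed fixed-size segments; real engines may vary

class DedupeStore:
    def __init__(self):
        self.hash_table = {}   # hash key -> location of the stored segment
        self.segments = []     # stands in for the backing storage media
        self.pointers = []     # logical view: each entry points at a stored segment

    def write(self, data: bytes):
        for off in range(0, len(data), SEGMENT_SIZE):
            segment = data[off:off + SEGMENT_SIZE]
            key = hashlib.sha256(segment).digest()
            if key in self.hash_table:
                # Duplicate: no new write, just a pointer to the original segment.
                self.pointers.append(self.hash_table[key])
            else:
                # Unique: write the segment and record its hash in the table.
                self.segments.append(segment)
                self.hash_table[key] = len(self.segments) - 1
                self.pointers.append(self.hash_table[key])

store = DedupeStore()
store.write(b"A" * 8192 + b"B" * 4096)   # the second 4 KB "A" block is deduplicated
print(len(store.segments), "segments stored for", len(store.pointers), "logical segments")
```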


Each additional entry in the deduplication key table makes the table larger; in other words, the more unique data the system stores, the larger this key tracking table becomes. The larger the table, the longer it may take to search it to confirm that a data segment is unique. And as the table grows, the deduplication technology must also take steps to protect it from corruption.


The size of the deduplication table was less of an issue in deduplication's first use case: backup. Backup data is highly redundant, which means the hash key table stays relatively small. The production data use case, which is exactly where All-Flash Arrays play, is a different story. The level of redundancy is much lower than in backup data, and as a result production systems with deduplication have to deal with much larger tables.


In database environments, the level of reduction can be as low as 3X. Even in virtual desktop and server environments, where there is a higher level of data redundancy, the effective efficiency of deduplication is about 9X, though reports suggest it can be as high as 25X.
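
To see what those ratios mean for effective cost, here is a back-of-envelope calculation; the $/GB figures are placeholder assumptions for illustration only, not quoted prices:

```python
# Hypothetical raw media prices, purely for illustration.
flash_cost_per_gb = 5.00   # assumed all-flash $/GB
hdd_cost_per_gb   = 0.50   # assumed hard-drive $/GB

for ratio in (3, 9, 25):   # reduction ratios cited above
    effective = flash_cost_per_gb / ratio
    print(f"{ratio}X reduction: effective flash cost ${effective:.2f}/GB "
          f"vs ${hdd_cost_per_gb:.2f}/GB raw HDD")
```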


Make no mistake, both of these efficiency levels are high enough to make the investment in deduplication worthwhile, especially in All-Flash storage systems, where every gigabyte of capacity reclaimed saves more dollars than it would in an equivalent hard drive system. But this investment in deduplication has to be better thought out than it was in the backup use case. IT planners and storage designers should consider the following factors when selecting an All-Flash system with deduplication.



Small Deduplication Table Size


The size of the deduplication key table, also known as the hash table, is critical because of the lookups required to confirm whether each data segment is unique. These lookups need to happen at real-time speeds so that application performance is not impacted, which typically means the table needs to be stored in DRAM for the fastest possible response. All storage systems have DRAM, but each GB of it adds to the cost. If the deduplication table can fit in the DRAM that typically comes with the system, then fast deduplication can happen without added cost.


To fit in DRAM, though, the deduplication table has to be as small as possible, and this is an area where some deduplication solutions fall short. For example, ZFS's deduplication can require up to 64 GB of RAM to track just 1 TB of data. SDFS, also known as OpenDedupe, improves on this but still requires 10 GB of RAM per TB of data.


Compare these results to Permabit's Albireo solution, which is available to storage integrators and designers. It requires only 0.1 GB of RAM per 1 TB of data. This means a much lower RAM requirement in the storage system for OEM manufacturers and a lower system cost for users. In addition, the smaller RAM requirement per TB enables greater scalability of the storage system, driving down its effective cost compared to HDD. The smaller deduplication table also means a much more stable environment, less exposed to corruption or failure.
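
The gap between those RAM figures comes down to how many bytes of table space each stored segment consumes. The sizing sketch below assumes 4 KB segments (actual segment sizes and per-entry overheads vary by product) and simply derives the per-entry footprint implied by the GB-per-TB numbers quoted above; a sub-byte-per-entry result implies an index that does not keep a full hash per segment in DRAM:

```python
TB = 10**12
GB = 10**9
SEGMENT_SIZE = 4 * 1024                 # assumed average segment size

segments_per_tb = TB // SEGMENT_SIZE    # ~244 million table entries per TB

# RAM-per-TB figures cited above; bytes/entry is what they imply at 4 KB segments.
for name, ram_gb_per_tb in [("ZFS", 64), ("SDFS/OpenDedupe", 10), ("Albireo", 0.1)]:
    bytes_per_entry = ram_gb_per_tb * GB / segments_per_tb
    print(f"{name}: {ram_gb_per_tb} GB RAM/TB -> ~{bytes_per_entry:.1f} bytes per table entry")
```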



Efficient Use of CPU Resources


Another critical component in storage system design is the type and number of CPUs. Each CPU, and the sophistication of that CPU (number of cores), adds to the expense of the storage system. As with DRAM, if the deduplication engine can execute efficiently on the CPU resources the storage system already comes with, then deduplication can again be added without additional hardware cost.


Obviously the storage system has more to do than simply track whether data is unique. Features like RAID, LUN and volume management, thin provisioning, snapshots, replication and data tiering all require CPU processing power. So it is important that the deduplication engine be efficient in its use of CPU resources so it can coexist with these other functions.


ZFS and OpenDedupe require up to four CPU cores to achieve acceptable performance that won't impact the user experience. For some systems, this could mean the expense of an additional CPU or two, plus extra power supplies and fans to power and cool the system. Also, the more CPUs required per motherboard, the more expensive those motherboards will be.
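
Hashing is typically the dominant per-segment CPU cost in the lookup path. The micro-benchmark below is purely illustrative of single-core hashing throughput (SHA-256 over 4 KB segments); it does not measure any of the products named above:

```python
import hashlib
import os
import time

SEGMENT_SIZE = 4096
segments = [os.urandom(SEGMENT_SIZE) for _ in range(50_000)]  # ~200 MB of sample data

start = time.perf_counter()
for seg in segments:
    hashlib.sha256(seg).digest()        # hash each segment on a single core
elapsed = time.perf_counter() - start

mb = len(segments) * SEGMENT_SIZE / 1e6
print(f"hashed {mb:.0f} MB on one core in {elapsed:.2f}s -> {mb/elapsed:.0f} MB/s")
```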


Permabit, on the other hand, is able to provide better performance with only two CPU cores. This is something Storage Switzerland observed in its initial testing; we will also be testing it again next month in our subsequent lab analysis. In our undersized Linux NFS server, CPU utilization has been negligible.



Inline Dedupe Required


Another key to success in All-Flash Arrays is for deduplication to be done inline, before data is written to the storage media. This means that data is segmented, examined for uniqueness, and a decision on that uniqueness is made, all before any write or pointer update is committed. This is a lot of work to happen in real time, and it has to happen fast enough that the process does not show a noticeable performance impact. It underscores the importance of efficient use of RAM and CPU resources.


The reason inline deduplication should be considered a requirement for All-Flash systems is that the process then not only reduces the capacity requirement instantly, it also prolongs the life of the flash NAND by eliminating unneeded writes before they occur. Write endurance is always a concern with flash-based technology, and eliminating a write before it ever happens is an excellent way to ease that concern.


Another deduplication method is post-process deduplication. This technique performs all of the above examination and action during less busy times, such as overnight hours. While it sidesteps the performance concern, post-process deduplication is unacceptable in All-Flash and even Hybrid Array environments. Not only does it create a temporary staging area that the storage manager needs to factor into their cost analysis and worry about, it can actually triple the number of writes to the flash storage area.


First, the write occurs whether the data is redundant or not, because post-process deduplication only looks for redundancy during less busy times; this write includes the RAID parity bits. Then, when the post-process deduplication task runs and identifies redundant data segments, it erases those segments and establishes pointers. The problem is that with flash, erasing data itself requires writing to the cells, typically by writing a series of zeros, and the associated parity bits are removed and erased the same way. In other words, post-process deduplication is often used to hide a poorly performing deduplication engine.
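
The tally below follows that reasoning for a single redundant 4 KB segment; it is a simplified model (parity counted, garbage collection and controller-level write amplification ignored), not a measurement:

```python
# Flash writes incurred by ONE redundant segment under each approach.

def inline_duplicate_writes() -> int:
    # The duplicate is detected before commit: no data write, no parity write,
    # only a pointer/table update in RAM.
    return 0

def post_process_duplicate_writes() -> int:
    writes = 1   # the redundant segment is written to flash first
    writes += 1  # its RAID parity is written/updated as well
    writes += 1  # the later cleanup pass rewrites the cells to remove the copy
    return writes

print("inline:      ", inline_duplicate_writes(), "flash write(s) per redundant segment")
print("post-process:", post_process_duplicate_writes(), "flash write(s) per redundant segment")
```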



Conclusion


All deduplication is not created equal, and the near-zero-latency reality of All-Flash and Hybrid arrays exposes those differences far more than a disk-based storage system does. For storage system designers, the result is high-cost systems that don't perform well at scale, which of course leads to unhappy or lost customers.


From an IT planner's perspective, the ultimate burden of a poorly performing deduplication engine lands on them. Just as "Intel Inside" became important, so should an understanding of how scalable and resource-efficient the deduplication inside the array is.
