Category: Servers & Storage

2008-01-09 11:34:50

Top 10 Questions (and Answers) People Ask About Data De-duplication

1. What does the term "data de-duplication" really mean?
There's really no industry-standard definition yet, but we're getting close. Everybody agrees that it's a system for eliminating the need to store redundant data, and most people limit it to systems that look for duplicate data at the block level rather than the file level. That's an important distinction. Imagine 20 copies of a presentation that have different title pages: to a file-level data reduction system they look like 20 completely different files. A block-level approach would see the commonality between them and use much less storage.

The most powerful data de-duplication uses a variable-length block approach. Products using this approach look at a sequence of data, segment it into variable-length blocks, and when they see a repeated block, store a pointer to the original instead of storing the block again. Since the pointer takes up far less space than the block, you save space. In backup, where the same blocks show up over and over, users can typically store 10 to 50 times more data than on conventional disk.
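To make the mechanism concrete, here is a minimal Python sketch of the store-once, point-thereafter idea. The DedupStore class and the sample blocks are hypothetical, and real products segment the incoming stream themselves rather than taking ready-made blocks:

    import hashlib

    class DedupStore:
        """Toy block store: each unique block is kept once, keyed by its hash."""

        def __init__(self):
            self.blocks = {}  # digest -> block bytes

        def ingest(self, blocks):
            """Store a sequence of blocks; return the pointer list (digests)."""
            pointers = []
            for block in blocks:
                digest = hashlib.sha256(block).hexdigest()
                if digest not in self.blocks:    # first time this block is seen
                    self.blocks[digest] = block  # store the data itself once
                pointers.append(digest)          # a repeat costs only a pointer
            return pointers

    # Two "backups" of a presentation that differ only in the title page:
    store = DedupStore()
    backup1 = store.ingest([b"title page v1", b"slide body", b"appendix"])
    backup2 = store.ingest([b"title page v2", b"slide body", b"appendix"])
    print(len(store.blocks))  # 4 unique blocks stored, not 6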

2. How can data de-duplication be applied to replication?
Replication is the process of sending duplicate data from a source to a target. If you replicate all the backup data, you need a relatively high-performance network to get the job done. But with de-duplication, the source system (the one sending data) looks for duplicate blocks in the replication stream. If it has already transmitted a block to the target system, it doesn't have to transmit it again; it simply sends a pointer. Since the pointer is much smaller than the block, replication can run over much lower-bandwidth networks.
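That logic can be sketched in a few lines of Python (a hypothetical function; a real replication protocol would also handle acknowledgements, ordering, and failure recovery):

    import hashlib

    def replicate(blocks, seen_by_target):
        """Yield what actually crosses the wire: full blocks for new data,
        short pointer messages for blocks the target already holds."""
        for block in blocks:
            digest = hashlib.sha256(block).hexdigest()
            if digest in seen_by_target:
                yield ("ptr", digest)          # ~32 bytes instead of a full block
            else:
                seen_by_target.add(digest)
                yield ("data", digest, block)  # each unique block travels once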

3. What applications does data de-duplication work with? Are there any that it doesn't work with?
When it's being used for backup, it supports all applications (email, databases, print and file services, etc.) and all qualified backup packages. Variable-length block de-duplication can find redundant blocks in the backup stream for all of them. Certain file types, such as some rich media formats, don't see much advantage the first time they pass through de-duplication, because the applications that write those files already eliminate redundancy. But if those files are backed up multiple times, or backed up again after small changes, de-duplication can still deliver very powerful capacity advantages.

4. Is there any way to tell how much de-duplication advantage I will get with my data?
There are four primary variables: how much the data changes (that is, how many new blocks get introduced), how well it compresses, what your backup methodology is (full vs. incremental, for example), and how long you plan to retain the data. Some vendors, Quantum among them, offer sizing calculators to estimate the effects.
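As a very rough illustration of how those four variables interact, the hypothetical helper below models weekly full backups; it is far simpler than a vendor sizing calculator:

    def estimate_disk_needed(full_size_gb, fulls_retained, change_rate, compression):
        """First full is stored compressed; each later full adds only its
        changed (new) blocks, also compressed."""
        first = full_size_gb / compression
        later = (fulls_retained - 1) * full_size_gb * change_rate / compression
        return first + later

    # 1 TB weekly fulls, 16 weeks retained, 5% new blocks/week, 2:1 compression:
    physical = estimate_disk_needed(1000, 16, 0.05, 2.0)
    logical = 1000 * 16
    print(f"{physical:.0f} GB on disk for {logical} GB of backups "
          f"(about {logical / physical:.0f}:1)")  # about 18:1

Longer retention or a lower change rate pushes the ratio up; frequent large changes pull it down.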

5. What is the real benefit of using data de-duplication?
There are really two. 1) Data de-duplication technology lets you keep more backup data on disk than any conventional disk backup system, which means you can restore more data, faster. 2) It makes it practical to use standard WANs and replication for DR protection, which means users can reduce their tape handling.

6. What is variable-length block data de-duplication? How do you get variable-length blocks and why would I want them?
It's easiest to think of the alternative. If you divided a stream of data into fixed-length segments, then every time data was inserted or deleted at one point, all the blocks downstream of the edit would shift and change. Variable-length blocking lets individual segments stretch or shrink while leaving downstream blocks unchanged, which greatly increases the system's ability to find duplicate data segments, so it saves significantly more space.
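The toy Python sketch below makes the point; the chunking function is a naive illustration, not any vendor's algorithm. Because cut points are chosen by the content of a small sliding window, an insertion disturbs only the chunks around the edit, and the boundaries downstream re-align:

    import hashlib, os

    def chunk(data, window=16, mask=0x3F):
        """Cut wherever a hash of the last `window` bytes hits a fixed
        pattern, so boundaries follow content, not byte offsets."""
        chunks, start = [], 0
        for i in range(window, len(data)):
            h = hashlib.sha1(data[i - window:i]).digest()
            if (h[0] & mask) == 0:  # boundary at roughly 1 in 64 positions
                chunks.append(data[start:i])
                start = i
        chunks.append(data[start:])
        return chunks

    base = os.urandom(4000)                           # a "file"
    edited = base[:2000] + b"INSERTED" + base[2000:]  # small mid-file insertion
    ca, cb = chunk(base), chunk(edited)
    print(f"{len(set(ca) & set(cb))} of {len(cb)} chunks unchanged after the edit")

With fixed 64-byte blocks, every block after the insertion point would shift and fail to match.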

7. If the data is divided into blocks, is it safe? How can it be restored?
The technology of using pointers to reference a sequence of data segments has been standard in the industry for decades; you use it every day, and it is safe. Whenever you write a large file to disk, it is stored in blocks on different disk sectors, in an order determined by space availability. When you "read" a file, you are really reading pointers in the file's metadata, which point to the various sectors in the right order. Block-based data de-duplication applies a similar kind of technology. And de-duplication vendors typically build in a variety of data integrity checks to verify that the system is sound and the data remains available.
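Continuing the toy store from question 1, a restore is just a walk down the pointer list, and the hash that serves as each pointer doubles as an integrity check (a hypothetical sketch; shipping products layer on additional verification):

    import hashlib

    def restore(pointers, blocks):
        """Rebuild the original stream from its pointer list, re-hashing
        each block on the way out as an end-to-end integrity check."""
        out = bytearray()
        for digest in pointers:
            block = blocks[digest]
            if hashlib.sha256(block).hexdigest() != digest:
                raise IOError(f"block {digest[:8]}... failed its integrity check")
            out.extend(block)
        return bytes(out)

    # e.g. restore(backup1, store.blocks) reproduces the first backup exactly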

8. Where does data de-duplication take place during the backup process?
There are really two choices. You can send all your backup data to a backup target and perform de-duplication there, or you can perform the de-duplication on the host during backup. Both systems are available and both have advantages. If you de-duplicate on the host during backup, you send less data over your backup connection, but you have to manage software on all the protected hosts, backups slow down because de-duplication adds overhead, and other applications running on the host server can be affected. If you de-duplicate at the backup target, you send more data over the connection, but you can use any backup software, you only have to manage a single target, and performance is normally much higher because the hardware is purpose-built for de-duplication.

9. Can de-duplication technology be used with tape?
No and yes. Data de-duplication needs random access to data blocks for both writing and reading, so it has to be implemented in a disk-based system. But tape can easily be written from a de-duplication data store, and in fact that is the norm. Most de-duplication customers plan on keeping a few weeks or months of backup data on disk and then using tape for longer-term storage. When you create a tape from de-duplicated data, the data is re-expanded so it can be read directly in a tape drive without having to be written back to a disk system first.

10. What do data de-duplication solutions really cost?
There's a lot of variability, but there is a pretty good rule-of-thumb starting point. Assuming an average de-duplication advantage of 20:1 (a number widely used in the industry), we have seen list prices in the range of $1/GB. So a system that could retain 20TB of backup data would have a list price of around $20,000, much lower than protecting the same data with conventional disk. A note: options can increase that price, and discounts from resellers or vendors can reduce it.
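In code, the rule of thumb is a one-liner (assuming the widely quoted $1 per GB of retained logical backup data; real quotes vary with options and discounts):

    def list_price_estimate(retained_backup_tb, price_per_gb=1.0):
        """Rule-of-thumb list price at ~20:1 de-duplication; illustrative only."""
        return retained_backup_tb * 1000 * price_per_gb

    print(list_price_estimate(20))  # 20 TB retained -> 20000.0, i.e. ~$20,000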