Category: LINUX

2005-10-14 11:45:47

QuickPar is a utility for creating Parity Volumes using the Reed-Solomon algorithm. For details of the algorithm used, see the website at SourceForge.

Parity Volumes may be used to verify that a set of files has not been corrupted, or to reconstruct damaged files (provided that you have a sufficient quantity of Parity Volumes to match the missing or damaged files).
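
As a rough intuition for how parity data can rebuild missing data, the sketch below uses plain XOR parity, which is essentially the one-recovery-block special case of the idea; the actual PAR scheme uses Reed-Solomon coding so that many recovery blocks can repair many missing blocks. The function names are made up for the illustration.

```python
# Illustrative only: XOR parity is the simplest special case of the idea
# behind Parity Volumes.  Real PAR files use Reed-Solomon coding, which
# lets N recovery blocks repair any N missing data blocks.

def make_parity(blocks):
    """XOR equal-sized data blocks together to produce one parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover_missing(blocks, parity):
    """Rebuild the single missing block (given as None) from the others."""
    recovered = bytearray(parity)
    for block in blocks:
        if block is not None:
            for i, byte in enumerate(block):
                recovered[i] ^= byte
    return bytes(recovered)

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(data)
assert recover_missing([b"AAAA", None, b"CCCC"], parity) == b"BBBB"
```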

QuickPar uses the PAR version 2.0 specification. The differences between PAR version 1.0 and PAR version 2.0 are described below.

What is the difference between the PAR 1.0 specification and the PAR 2.0 specification?

PAR 1.0 has a number of limitations:

  • Damaged files cannot be repaired; they must be fully reconstructed instead. This means that a single-byte error in a 10MB file would require the use of one whole PAR file to reconstruct the damaged file.
  • All of the PAR files are of equal size and contain enough recovery data to reconstruct the largest source file. This means that if you have source files of varied sizes and the smallest one is damaged, you still need a whole PAR file to reconstruct it. When PAR is used on UseNet, this could mean that you have to download a 10MB PAR file to reconstruct a 3MB data file (see the sketch after this list).
  • Damaged PAR files are of no use during reconstruction. A single-byte error in a PAR file renders all of the recovery data it contains useless.
  • When used with small numbers of source files, it is very inefficient, and you need to create an excessive number of PAR files to achieve a desired level of protection. For this reason, files are normally split into many equal-sized pieces and PAR files generated from those pieces.
  • It cannot handle more than 255 files.
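
A back-of-envelope sketch of the second point above; the file names and sizes are invented for the illustration.

```python
# PAR 1.0: every PAR file is as large as the largest source file,
# no matter which file actually needs to be reconstructed.
# Sizes below are invented for the example.

source_sizes_mb = {"movie.avi": 10.0, "sample.jpg": 3.0, "readme.txt": 0.01}

par_file_size = max(source_sizes_mb.values())  # each PAR file is 10 MB

# Reconstructing the damaged 3 MB file still costs one whole PAR file.
damaged = "sample.jpg"
print(f"Download {par_file_size} MB of recovery data "
      f"to rebuild a {source_sizes_mb[damaged]} MB file")
```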

PAR 2.0 either completely removes or significantly reduces these limitations:

  • Damaged files can be repaired. A single-byte error in a 10MB file might only require the use of recovery data from a PAR file that is only 100KB in size.
  • There is no relationship between the size of the data files and the size of the PAR files. Also, the PAR files will normally be of varied sizes, allowing you to pick the sizes appropriate to the amount of damaged data you need to repair.
  • Damaged PAR files will still be usable. PAR 2.0 can use the undamaged parts of a PAR file.
  • PAR files can be generated from a single source file without the need to split it. On UseNet, this removes the need to use RAR or any other file splitter. Please note, however, that due to the limitations of some newsreaders (which do not permit the download of incomplete files), it is advisable to use RAR to split very large files.
  • It can handle up to 32768 files.

So how exactly does PAR 2.0 remove all of these limitations?

The limitations of PAR 1.0 are all due to the fact that it operates on "whole" files.

PAR 2.0 operates by "virtually" splitting the files you wish to protect into many smaller "slices" (or blocks) of data. PAR 2.0 then processes these virtual slices in the same way that PAR 1.0 would process whole files. The resulting blocks of recovery data are the same size as these slices, and for convenience, many of them will be placed in a single PAR file.
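
A minimal sketch of the "virtual slicing" idea, assuming a user-chosen slice size; the Reed-Solomon computation over the slices is omitted, and the names here are not taken from any real PAR2 implementation.

```python
# Illustrative only: split a set of files into fixed-size "virtual" slices.
# PAR 2.0 then runs its recovery computation over these slices exactly as
# PAR 1.0 would over whole files; each recovery slice is the same size as
# a data slice, and several recovery slices are packed into one PAR file.

SLICE_SIZE = 4096  # bytes; real tools let the user choose this value

def virtual_slices(paths):
    """Yield (path, slice_index, data) for every slice of every file.

    The final slice of a file is padded with zero bytes so that every
    slice has the same length.
    """
    for path in paths:
        with open(path, "rb") as f:
            index = 0
            while True:
                chunk = f.read(SLICE_SIZE)
                if not chunk:
                    break
                yield path, index, chunk.ljust(SLICE_SIZE, b"\x00")
                index += 1
```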


How does using a PAR 2.0 program differ from a PAR 1.0 program?

A PAR 2.0 program will work equally well with a single 800MB file or with hundreds of files of varied sizes. It is not necessary either to split large files or to archive large numbers of small files. This makes the files directly usable without the need to rejoin or unpack them first. This means that it is possible to place PAR files on an SVCD to protect a video.

With PAR 1.0, the only option you need to specify when creating PAR files is how much recovery data you wish to create.

With PAR 2.0, you must also specify how many slices the files will be virtually divided into. Using a larger number of smaller slices allows more accurate detection of errors in files and reduces the amount of recovery data required to achieve a successful repair.
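
A quick back-of-envelope illustration of that trade-off, assuming a single damaged byte in a 10MB file; the slice counts are arbitrary examples.

```python
# With one damaged byte, the repair needs exactly one recovery slice,
# so more (and therefore smaller) slices mean less recovery data is used.

file_size = 10 * 1024 * 1024  # a 10 MB source file

for slice_count in (100, 1_000, 10_000):
    slice_size = file_size // slice_count
    print(f"{slice_count:>6} slices of {slice_size // 1024:>4} KB "
          f"-> one damaged byte costs ~{slice_size // 1024} KB of recovery data")
```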
