Category: LINUX

2012-02-10 14:17:01

Value too large for defined data type

It means that the utilities you are using were not compiled with large file support enabled. The GNU utilities do support large files if they are compiled to do so. You may want to recompile them and make sure that large file support is enabled. This support is configured automatically by autoconf on most systems. But it is possible that on your particular system autoconf could not determine how to enable it and therefore concluded that your system did not support large files.
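One rough way to check whether a particular utility was built with large file support, on 32-bit glibc systems, is to look for the 64-bit variants of the file calls in its dynamic symbol table (the path /bin/rm is only an example; on a native 64-bit build the check is not meaningful because the plain calls already use 64-bit offsets):

objdump -T /bin/rm | grep -E 'open64|stat64'   # entries such as open64 or __xstat64 suggest the build used -D_FILE_OFFSET_BITS=64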

The message "Value too large for defined data type" is a system error message reported when an operation on a large file is attempted using a non-large-file data type. Large files are defined as anything larger than a signed 32-bit integer can represent, or stated differently, larger than 2GB.

Many system calls that deal with files return values in a "long int" data type. On 32-bit hardware a long int is 32 bits, and therefore this imposes a 2GB limit on the size of files. When this was designed that was HUGE and it was hard to conceive of needing anything that large. Time has passed and files can be much larger today. On native 64-bit systems the file size limit is usually 2GB * 2GB, which we will again think is huge.
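Both limits are easy to work out in a shell with 64-bit arithmetic such as bash; the second line computes the informal "2GB * 2GB" figure above rather than the exact signed 64-bit maximum:

echo $(( 2**31 - 1 ))       # 2147483647 bytes: the largest signed 32-bit value, i.e. the 2GB limit
echo $(( 2**31 * 2**31 ))   # 4611686018427387904 bytes: the rough "2GB * 2GB" figure for 64-bit systems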

On a 32-bit system with a 32-bit "long int" you find that you can't make it any bigger and also maintain compatibility with previous programs. Changing that would break many things! But many systems make it possible to switch into a new programming mode which rewrites all of the file operations into a 64-bit model. Instead of "long" they use a new data type called "off_t" which is constructed to be 64 bits in size. Program source code must be written to use the off_t data type instead of the long data type. This is typically done by defining -D_FILE_OFFSET_BITS=64 or some such; it is system dependent. Once done and once switched into this new mode, most programs will support large files just fine.
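On many systems getconf will report the exact flags needed so that you do not have to guess; the file name prog.c below is only a placeholder:

getconf LFS_CFLAGS                          # typically prints -D_FILE_OFFSET_BITS=64 on 32-bit systems
gcc $(getconf LFS_CFLAGS) -o prog prog.c    # compile a program with large file support enabled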

See the next question if you have inadvertently created a large file and now need some way to deal with it.

How to remove it.

I created a file with tar cvf backup.tar. Now I cannot "rm" this file. The error message is:

rm: cannot remove `backup.tar': Value too large for defined data type

What can I do to remove that file?

Sometimes one utility such as tar will be compiled with large file support while another utility like rm will be compiled without. It happens. This means you might find yourself with a large file created by one utility but be unable to work with it using another.
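A quick way to see which of your utilities are affected is to create a sparse file just past the 2GB boundary with a utility that does have large file support and then try the suspect commands on it (bigfile is only an example name):

dd if=/dev/zero of=bigfile bs=1 count=1 seek=2147483647   # writes one byte at offset 2^31-1, so the file is 2^31 bytes, one past the signed 32-bit limit
ls -l bigfile                                             # an ls built without large file support fails here with the same error
rm bigfile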

At this point we need to be clever. Find a utility that can operate on a large file and use it to remove or truncate the file. Here are several examples of how to work around this problem. Of course in a perfect world you would recompile the utilities to support large files and not worry about needing a workaround.

This first example requires a perl that was itself compiled with large file support.

perl -e 'unlink("backup.tar");'
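Whether your perl was built that way can usually be checked from its own configuration:

perl -V:uselargefiles    # prints uselargefiles='define'; when large file support was compiled in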

If perl is not an option, let's hit the problem more directly. Truncate the file first. That will make it small and then you can remove it. The shell will do this when redirecting the output of a command into the file.

true > backup.tar
rm backup.tar
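The ":" no-op builtin works just as well as true here, since all of the work is done by the empty redirection:

: > backup.tar
rm backup.tar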

However, if your shell was not compiled for large files then the redirection will fail as well. In that case we have to resort to more subtle methods. Since tar created the file, tar must have been compiled with large file support. Use that to your advantage to truncate the file.

tar cvf backup.tar /dev/null
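This overwrites the huge backup.tar with a tiny archive containing only the /dev/null entry, after which even an rm without large file support can delete it:

ls -l backup.tar   # now only a few kilobytes
rm backup.tar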