Category: LINUX
2006-03-17 11:48:02
Frequently Asked Questions
If you get an error like one of these:
rsync: error writing 4 unbuffered bytes - exiting: Broken pipe
rsync error: error in rsync protocol data stream (code 12) at io.c(463)
or
rsync: connection unexpectedly closed (24 bytes read so far)
rsync error: error in rsync protocol data stream (code 12) at io.c(342)
please read the issues and debugging page for details on how you can try to figure out what is going wrong.
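A first step that often helps narrow such errors down (a sketch only; the source and destination paths are placeholders) is to repeat the transfer with extra verbosity and capture the error output for inspection:
rsync -avv --stats /src/ host:/dest/ 2>rsync-errors.log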
Some people occasionally report that rsync copies every file when they expect it to copy only a small subset. In most cases the explanation is that you forgot to include the --times (-t) option in the original copy, so rsync is forced to (efficiently) transfer every file to see if it has changed (because the modified time and size do not match).
Another common cause involves sending files to a Microsoft filesystem: if the file's modified time is an odd value but the receiving filesystem can only store even values, then rsync will re-transfer too many files. You can avoid this by specifying the --modify-window=1 option.
If you think that rsync is erroneously copying every file then look at the stats produced with -v and see if rsync is really sending all the data. See also the --checksum (-c) option for one way to avoid the extra copying without synchronizing the modified times.
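For example (a sketch only; the paths are placeholders), a copy that preserves modification times and tolerates the coarser timestamp granularity of a Microsoft filesystem might look like this:
rsync -av --modify-window=1 /src/ /mnt/windows-share/
Here -a implies -t, so subsequent runs can skip unchanged files based on size and modification time alone.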
The "is your shell clean" message and the "protocol mismatch" message are usually caused by having some sort of program in your .cshrc, .profile, .bashrc or equivalent file that writes a message every time you connect using a remote-shell program (such as ssh or rsh). Data written in this way corrupts the rsync data stream. rsync detects this at startup and produces those error messages. However, if you are using rsync-daemon syntax (host::path or rsync://) without using a remote-shell program (no --rsh or -e option), there is not remote-shell program involved, and the problem is probably caused by an error on the daemon side (so check the daemon logs).
A good way to test if your remote-shell connection is clean is to try something like this (use ssh or rsh, as appropriate):
ssh remotesystem /bin/true > test.dat
That should create a file called test.dat with nothing in it. If test.dat is not of zero length then your shell is not clean. Look at the contents of test.dat to see what was sent. Look at all the startup files on remotesystem to try and find the problem.
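For example, after running the test you can check the result like this (these exact commands are just one way to inspect the file):
ls -l test.dat
cat test.dat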
Yes, rsync uses a lot of memory. The majority of the memory is used to hold the list of files being transferred. This takes about 100 bytes per file, so if you are transferring 800,000 files then rsync will consume about 80M of memory. It will be higher if you use -H or --delete.
To fix this requires a major rewrite of rsync, which may or may not happen.
The usual reason for "out of memory" when running rsync is that you are transferring a _very_ large number of files. The size of the files doesn't matter, only the total number of files.
As a rule of thumb you should expect rsync to consume about 100 bytes per file in the file list. This happens because rsync builds an internal file-list structure containing all the vital details of each file. rsync needs to hold this structure in memory because it is constantly traversed.
A future version of rsync could be built with an improved protocol that transfers files in a more incremental fashion, which would require a lot less memory. Unfortunately, such an rsync does not yet exist.
If you have a setup where there is no way to directly connect two systems for an rsync transfer, there are several ways to get a firewall system to act as an intermediary in the transfer. You'll find full details on the firewall methods page.
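One common pattern (a sketch only, not taken from that page; it assumes a reasonably recent OpenSSH client and uses placeholder hostnames and paths) is to tunnel the transfer through the intermediary with ssh's ProxyJump option:
rsync -av -e 'ssh -J gateway.example.com' /src/ internalhost:/dest/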
On some systems (notably SunOS4) cron supplies what looks like a socket to rsync, so rsync thinks that stdin is a socket. This means that if you start rsync with the --daemon switch from a cron job, you end up with rsync thinking it has been started from inetd. The fix is simple: just redirect stdin from /dev/null in your cron job.
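For example (a sketch only; the schedule and binary path are placeholders), a crontab entry that applies the redirection might look like this:
0 2 * * * /usr/bin/rsync --daemon < /dev/null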
This error is produced when the remote shell is unable to locate the rsync binary in your path. There are 3 possible solutions:
install rsync in a "standard" location that is in your remote path.
modify your .cshrc, .bashrc etc on the remote system to include the path that rsync is in
use the --rsync-path option to explicitly specify the path on the remote system where rsync is installed
You may find the command:
ssh host 'echo $PATH'
useful for determining what your remote path is.
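For example (a sketch only; the binary location and paths are just illustrations), the third approach looks like this:
rsync -av --rsync-path=/usr/local/bin/rsync /src/ host:/dest/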
Can rsync copy files with spaces in them?
Short answer: Yes, rsync can handle filenames with spaces.
Long answer:
Rsync handles spaces just like any other unix command-line application. Within the code, spaces are treated like any other character, so a filename with a space is no different from a filename containing any other character.
The problem of spaces is in the argv processing done to interpret the command line. As with any other unix application you have to escape spaces in some way on the command line or they will be used to separate arguments.
It is slightly trickier in rsync (and other remote-copy programs like scp) because rsync sends a command line to the remote system to launch the peer copy of rsync (this assumes that we're not talking about daemon mode, which is not affected by this problem because no remote shell is involved in the reception of the filenames). The command line is interpreted by the remote shell and thus the spaces need to arrive on the remote system escaped so that the shell doesn't split such filenames into multiple arguments.
For example:
rsync -av host:'a long filename' /tmp/
This is usually a request for rsync to copy 3 files from the remote system, "a", "long", and "filename" (the only exception to this is for a system running a shell that does not word-split arguments in its commands, and that is exceedingly rare). If you wanted to request a single file with spaces, you need to get some kind of space-quoting characters to the remote shell that is running the remote rsync command. The following commands should all work:
rsync -av host:'"a long filename"' /tmp/ rsync -av host:'a\ long\ filename' /tmp/ rsync -av host:a\\\ long\\\ filename /tmp/
You might also like to use a '?' in place of a space as long as there are no other matching filenames than the one with spaces (since '?' matches any character):
rsync -av host:a?long?filename /tmp/
As long as you know that the remote filenames on the command line are interpreted by the remote shell then it all works fine.
Some folks would like to ignore the "vanished files" warning, which manifests as exit code 24. The easiest way to do this is to create a shell script wrapper. For instance, name this something like "rsync-no24":
#!/bin/sh
rsync "$@"
e=$?
if test $e = 24; then
    exit 0
fi
exit $e
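Once the wrapper is saved and made executable (the filename, location, and paths below are just examples), it can be used in place of rsync:
chmod +x /usr/local/bin/rsync-no24
rsync-no24 -av /src/ host:/dest/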
If you get "Read-only file system" as an error when sending to a rsync daemon then you probably forgot to set "read only = no" for that module.