wget is quite powerful: with just a couple of commands you can download every link on a page instead of saving each one by hand, and it also picks up resources that are easy to miss, such as .js and .css files. The command looks like this:
$ wget -F -i index.html -B URL
The relevant options, as described in the wget manual:
-i file
--input-file=file
Read URLs from file, in which case no URLs need to be on the
command line. If there are URLs both on the command line and in an
input file, those on the command lines will be the first ones to be
retrieved. The file need not be an HTML document (but no harm if
it is)---it is enough if the URLs are just listed sequentially.
However, if you specify --force-html, the document will be regarded
as html. In that case you may have problems with relative links,
which you can solve either by adding "<base href="url">" to the
documents or by specifying --base=url on the command line.
-F
--force-html
When input is read from a file, force it to be treated as an HTML
file. This enables you to retrieve relative links from existing
HTML files on your local disk, by adding "<base href="url">" to
HTML, or using the --base command-line option.
-B URL
--base=URL
When used in conjunction with -F, prepends URL to relative links in
the file specified by -i.
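To make the interplay of -F and -B concrete, here is a minimal sketch of what they do: wget parses the local file as HTML, extracts the relative links, and prepends the base URL to each before fetching. The file name saved.html and the base http://example.com/ below are assumptions for illustration; the grep/sed pipeline just previews the URLs wget would request, without touching the network.

```shell
# Create a small saved page containing relative links (hypothetical content).
cat > saved.html <<'EOF'
<a href="style.css">css</a>
<a href="app.js">js</a>
EOF

# Preview what "wget -F -i saved.html -B http://example.com/" would fetch:
# pull out each href and prepend the base URL, as -B does.
grep -o 'href="[^"]*"' saved.html \
  | sed -e 's/^href="//' -e 's/"$//' -e 's|^|http://example.com/|'
# http://example.com/style.css
# http://example.com/app.js
```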
Downloading a directory:
If the target is an HTTP directory listing, Windows Explorer may render it like an ordinary folder of files, while Firefox on Linux shows it as a plain index page; when there are many files, saving them one at a time is impractical. Use the following wget command instead:
wget -nd -r -l1 --no-parent URL
A brief explanation of the options used in this command:
-nd  do not create directories; by default wget recreates the remote directory structure locally
-r  download recursively
-l1  (lowercase L, one) recurse one level only: download the contents of the specified directory, but not its subdirectories
--no-parent (-np)  do not download files from the parent directory
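The depth-limiting effect of -l1 can be sketched without a network at all: it is analogous to restricting a directory walk to one level. The layout below (site/docs with a subdirectory and a parent file) is a hypothetical stand-in for the remote tree; find -maxdepth 1 mimics which files -l1 --no-parent would fetch from docs/.

```shell
# Simulate a remote directory tree locally (hypothetical layout).
mkdir -p site/docs/sub
touch site/docs/a.pdf site/docs/b.pdf site/docs/sub/c.pdf site/parent.txt

# With -l1 --no-parent, wget fetches only the files directly inside
# docs/ -- neither sub/ below it nor parent.txt above it:
find site/docs -maxdepth 1 -type f | sort
# site/docs/a.pdf
# site/docs/b.pdf
```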