8.3. Extracting data from HTML documents
To extract data from HTML documents, subclass the SGMLParser class and define methods for each tag or entity you want to capture.
The first step to extracting data from an HTML document is getting some HTML. If you have some HTML lying around on your hard drive, you can use file functions to read it, but the real fun begins when you get HTML from live web pages.
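For HTML already on disk, plain file functions are all you need; a one-line sketch, assuming some index.html sitting in the current directory:

>>> htmlsource = open("index.html").read()   # any local HTML file, read as one string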
Example 8.5. Introducing urllib
The urllib module is part of the standard Python library. It contains functions for getting information about, and actually retrieving data from, Internet-based URLs (mainly web pages). The simplest use of urllib is to retrieve the entire text of a web page using the urlopen function. Opening a URL is similar to opening a file. The return value of urlopen is a file-like object, which has some of the same methods as a file object.
The simplest thing to do with the file-like object returned by urlopen is read, which reads the entire HTML of the web page into a single string. The object also supports readlines, which reads the text line by line into a list. When you're done with the object, make sure to close it, just like a normal file object. You now have the complete HTML of the home page in a string, and you're ready to parse it.
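The example session itself did not survive in this copy; the following is a minimal reconstruction of the steps just described, assuming Python 2 (where urlopen lives in urllib; Python 3 moved it to urllib.request) and using http://diveintopython.org/ as a stand-in URL:

>>> import urllib
>>> sock = urllib.urlopen("http://diveintopython.org/")   # open a URL much like a file
>>> htmlsource = sock.read()    # the entire page as a single string
>>> # sock.readlines() would instead return the text as a list of lines
>>> sock.close()                # close it, just like a normal file object
>>> print htmlsource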
Example 8.6. Introducing urllister.py
If you have not already done so, you can download this and the other examples used in this book.
reset is called by the __init__ method of SGMLParser, and it can also be called manually once an instance of the parser has been created. So if you need to do any initialization, do it in reset, not in __init__, so that it will be re-initialized properly when someone re-uses a parser instance.
start_a is called by SGMLParser whenever it finds an <a> tag. The tag may contain an href attribute, and/or other attributes, like name or title. The attrs parameter is a list of tuples, [(attribute, value), (attribute, value), ...]. Or it may be just an <a>, a valid (if useless) HTML tag, in which case attrs would be an empty list.
You can find out whether this tag has an href attribute with a simple multi-variable list comprehension.
String comparisons like k=='href' are always case-sensitive, but that's safe in this case, because SGMLParser converts attribute names to lowercase while building attrs.
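Putting the pieces described above together, here is a sketch of what urllister.py looks like. It is reconstructed from the annotations, assumes Python 2 (sgmllib was removed in Python 3), and the class name URLLister is an assumption for illustration, not something this text confirms:

from sgmllib import SGMLParser   # Python 2 only; removed in Python 3

class URLLister(SGMLParser):     # class name assumed for illustration
    def reset(self):
        SGMLParser.reset(self)   # let SGMLParser reset its own internal state
        self.urls = []           # do our initialization here, not in __init__

    def start_a(self, attrs):
        # attrs is a list of (attribute, value) tuples; SGMLParser has
        # already lowercased the attribute names, so 'href' is safe
        href = [v for k, v in attrs if k == 'href']
        if href:
            self.urls.extend(href)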
Example 8.7. Using urllister.py
... rest of output omitted for brevity ...
Call the feed method, defined in SGMLParser, to get HTML into the parser. It takes a string, which is what usock.read() returns.
Like files, you should close your URL objects as soon as you're done with them.
You should close your parser object, too, but for a different reason. You've read all the data and fed it to the parser, but the feed method isn't guaranteed to have actually processed all the HTML you give it; it may buffer it, waiting for more. Be sure to call close to flush the buffer and force everything to be fully parsed.
Once the parser is closed, the parsing is complete, and parser.urls contains a list of all the linked URLs in the HTML document. (Your output may look different, if the download links have been updated by the time you read this.)
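A sketch of the complete session this example walks through, again assuming Python 2 and the hypothetical URLLister class from the sketch above:

>>> import urllib, urllister
>>> usock = urllib.urlopen("http://diveintopython.org/")
>>> parser = urllister.URLLister()
>>> parser.feed(usock.read())   # feed takes a string, which is what usock.read() returns
>>> usock.close()               # close the URL object as soon as you're done with it
>>> parser.close()              # flush the parser's buffer so everything is parsed
>>> for url in parser.urls:
...     print url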