Category: Python/Ruby
2015-09-22 23:35:26
Original post: "python模拟一个浏览器" (simulating a browser in Python), by 刘一痕
import mechanize
import cookielib

# Browser
br = mechanize.Browser()

# Cookie Jar
cj = cookielib.LWPCookieJar()
br.set_cookiejar(cj)

# Browser options
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)

# Follows refresh 0 but not hangs on refresh > 0
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)

# Want debugging messages?
#br.set_debug_http(True)
#br.set_debug_redirects(True)
#br.set_debug_responses(True)

# User-Agent (this is cheating, ok?)
br.addheaders = [('User-agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
# Open some site, let's pick a random one, the first that pops in mind:
r = br.open('http://google.com')
html = r.read()

# Show the source
print html
# or
print br.response().read()

# Show the html title
print br.title()

# Show the response headers
print r.info()
# or
print br.response().info()

# Show the available forms
for f in br.forms():
    print f

# Select the first (index zero) form
br.select_form(nr=0)

# Let's search
br.form['q'] = 'weekend codes'
br.submit()
print br.response().read()

# Looking at some results in link format
for l in br.links(url_regex='stockrt'):
    print l
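If you don't know a form's field names in advance, you can enumerate the controls of the selected form before filling it in; a small sketch:

# Inspect the controls of the currently selected form
br.select_form(nr=0)
for control in br.form.controls:
    print control.type, control.name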
# If the protected site didn't receive the authentication data you would
# end up with a 401 (Unauthorized) error in your face
br.add_password('http://protected.example.com', 'username', 'password')  # placeholder URL
br.open('http://protected.example.com')
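If the site rejects the credentials, br.open() raises an HTTP error you can catch; a minimal sketch (the URL is again a placeholder):

import mechanize

try:
    br.open('http://protected.example.com')  # placeholder URL
except mechanize.HTTPError as e:
    print 'Request failed with HTTP status', e.code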
# Testing presence of link (if the link is not found you would have to
# handle a LinkNotFoundError exception)
br.find_link(text='Weekend codes')

# Actually clicking the link
req = br.click_link(text='Weekend codes')
br.open(req)
print br.response().read()
print br.geturl()

# Back
br.back()
print br.response().read()
print br.geturl()
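As the comment above says, find_link() raises a LinkNotFoundError when nothing matches; a small sketch of guarding against that:

import mechanize

try:
    link = br.find_link(text='Weekend codes')
except mechanize.LinkNotFoundError:
    print 'Link not found on this page'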
# Download
f = br.retrieve('http://www.google.com.br/intl/pt-BR_br/images/logo.gif')[0]
print f
fh = open(f)
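retrieve() returns a (filename, headers) tuple; if you pass a second argument it is used as the local filename. A brief sketch (the local filename here is arbitrary):

# Save the image under a name of our choosing
filename, headers = br.retrieve(
    'http://www.google.com.br/intl/pt-BR_br/images/logo.gif', 'logo.gif')
print filename
print headers  # the response headers
data = open(filename, 'rb').read()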
# Proxy and user/password
br.set_proxies({"http": "joe:password@myproxy.example.com:3128"})

# Proxy
br.set_proxies({"http": "myproxy.example.com:3128"})

# Proxy password
br.add_proxy_password("joe", "password")
# Simple open?
import urllib2
print urllib2.urlopen('http://example.com').read()  # placeholder URL

# With password?
import urllib
opener = urllib.FancyURLopener()
print opener.open('http://example.com').read()  # placeholder URL
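For a password-protected URL, urllib2 can also authenticate non-interactively with a basic-auth handler instead of FancyURLopener's prompt; a minimal sketch (URL and credentials are placeholders):

import urllib2

password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, 'http://protected.example.com', 'username', 'password')
auth_handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(auth_handler)
print opener.open('http://protected.example.com').read()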
Source: 翻译使用python模仿浏览器行为/ (a translation of a post on emulating browser behavior with Python)
——————————————————————————————
Finally, let's discuss a concept and technique that matters a great deal when you access pages from code: the cookie.
As we all know, HTTP is a stateless protocol, yet the client and the server need to retain some shared state, such as cookies. Only with a cookie can the server recognize that this is the user who just logged in, and grant the client permission to access certain pages.
For example, to read Sina Weibo in a browser you must log in first; only after a successful login can you open the other pages. A program that logs in to Sina Weibo, or to any other site requiring authentication, works the same way: the key is to save the cookie and attach it to subsequent requests.
This is where Python's cookielib and urllib2 come in: bind cookielib to urllib2, and every page request will carry the cookie.
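A minimal sketch of that binding (the URL is a placeholder); after urllib2.install_opener(), even bare urllib2.urlopen() calls send and collect cookies:

import urllib2
import cookielib

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
urllib2.install_opener(opener)  # make it the default for urlopen()

urllib2.urlopen('http://example.com')  # placeholder URL; response cookies land in cj
for cookie in cj:
    print cookie.name, cookie.value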
Concretely, the first step is to use the HTTPFox plugin for Firefox: browse to the Sina Weibo home page in the browser, log in, and check HTTPFox's records to see what data each step sent and which URL it requested. Then reproduce that flow in Python: send the username and password to the login page with urllib2.urlopen, capture the post-login cookie, and use it to visit other pages and fetch Weibo data.
The main job of the cookielib module is to provide objects that can store cookies, designed to work with urllib2 when accessing Internet resources. For example, a CookieJar object from this module can capture cookies and resend them on subsequent requests. The main classes the module provides are CookieJar, FileCookieJar, MozillaCookieJar, and LWPCookieJar.
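The FileCookieJar subclasses can also persist cookies to disk and load them back in a later run, which is exactly what the Weibo approach above needs; a small sketch with LWPCookieJar (the filename is arbitrary):

import cookielib

cj = cookielib.LWPCookieJar()
# ... make some requests through an opener bound to cj ...
cj.save('cookies.txt', ignore_discard=True, ignore_expires=True)

# In a later session, reload the saved cookies:
cj2 = cookielib.LWPCookieJar()
cj2.load('cookies.txt', ignore_discard=True, ignore_expires=True)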
The urllib2 module is similar to urllib: both open URLs and fetch data from them. Unlike urllib, however, urllib2 not only offers urlopen() but also lets you define a custom opener for accessing pages. Note that urlretrieve() lives in urllib; urllib2 has no such function. In practice urllib2 is rarely used without urllib anyway, because POST data must be encoded with urllib.urlencode().
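A short sketch of the two helpers just mentioned (all URLs are placeholders):

import urllib
import urllib2

# Encode a dict as application/x-www-form-urlencoded POST data
post_data = urllib.urlencode({'email': 'user@example.com', 'password': 'secret'})
response = urllib2.urlopen('http://example.com/login', post_data)  # placeholder URL

# urlretrieve() is in urllib, not urllib2
urllib.urlretrieve('http://example.com/logo.gif', 'logo.gif')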
cookielib is generally paired with urllib2, mainly by wrapping a CookieJar in urllib2.HTTPCookieProcessor() and passing that to urllib2.build_opener(). The following code, which logs in to Renren (人人网), shows how:
#!/usr/bin/env python
#coding=utf-8
import urllib2
import urllib
import cookielib

data = {"email": "username", "password": "password"}  # your login email and password
post_data = urllib.urlencode(data)
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
headers = {"User-agent": "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"}
req = urllib2.Request("http://www.renren.com/PLogin.do", post_data, headers)
content = opener.open(req)
print content.read().decode("utf-8").encode("gbk")
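After a successful login, cj holds the session cookie and the same opener stays authenticated for subsequent requests; a sketch (the post-login URL is hypothetical):

# The session cookie in cj is sent automatically from now on
profile = opener.open("http://www.renren.com/home")  # hypothetical post-login URL
print profile.read()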
See also: logging in to Sina Weibo and scraping data with Python's cookielib and urllib2.