
2016-07-11 21:31:17

First, make sure that:

  • Requests is installed
  • Requests is up to date

Let's get started with some simple examples.

Make a Request

Making a request with Requests is very simple.

Begin by importing the Requests module:

>>> import requests 

Now, let's try to get a webpage. For this example, let's get GitHub's public timeline:

>>> r = requests.get('https://api.github.com/events') 

Now, we have a Response object called r. We can get all the information we need from this object.

Requests' simple API means that all forms of HTTP request are as obvious. For example, this is how you make an HTTP POST request:

>>> r = requests.post('http://httpbin.org/post', data = {'key':'value'}) 

Nice, right? What about the other HTTP request types: PUT, DELETE, HEAD and OPTIONS? These are all just as simple:

>>> r = requests.put('http://httpbin.org/put', data = {'key':'value'}) 
>>> r = requests.delete('http://httpbin.org/delete') 
>>> r = requests.head('http://httpbin.org/get') 
>>> r = requests.options('http://httpbin.org/get') 

That's all well and good, but it's also only the start of what Requests can do.

Passing Parameters In URLs

You often want to send some sort of data in the URL's query string. If you were constructing the URL by hand, this data would be given as key/value pairs in the URL after a question mark, e.g. httpbin.org/get?key=val. Requests allows you to provide these arguments as a dictionary, using the params keyword argument. As an example, if you wanted to pass key1=value1 and key2=value2 to httpbin.org/get, you would use the following code:

>>> payload = {'key1': 'value1', 'key2': 'value2'} 
>>> r = requests.get('http://httpbin.org/get', params=payload) 

You can see that the URL has been correctly encoded by printing the URL:

>>> print(r.url)
http://httpbin.org/get?key2=value2&key1=value1

Note that any dictionary key whose value is None will not be added to the URL's query string.
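For instance (a quick sketch, reusing httpbin.org/get from the examples above), a key whose value is None is simply dropped:

>>> payload = {'key1': 'value1', 'key2': None} 
>>> r = requests.get('http://httpbin.org/get', params=payload) 
>>> print(r.url)
http://httpbin.org/get?key1=value1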

You can also pass a list of items as a value:

>>> payload = {'key1': 'value1', 'key2': ['value2', 'value3']} 
>>> r = requests.get('http://httpbin.org/get', params=payload) 
>>> print(r.url)
http://httpbin.org/get?key1=value1&key2=value2&key2=value3

Response Content

We can read the content of the server's response. Consider the GitHub timeline again:

>>> import requests 
>>> r = requests.get('https://api.github.com/events') 
>>> r.text
u'[{"repository":{"open_issues":0,"url":"https://github.com/...

Requests will automatically decode content from the server. Most unicode charsets are seamlessly decoded.

When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access r.text. You can find out what encoding Requests is using, and change it, using the r.encoding property:

>>> r.encoding
'utf-8'
>>> r.encoding = 'ISO-8859-1' 

If you change the encoding, Requests will use the new value of r.encoding whenever you call r.text. You might want to do this in any situation where you can apply special logic to work out what the encoding of the content will be. For example, HTML and XML have the ability to specify their encoding in their body. In situations like this, you should use r.content to find the encoding, and then set r.encoding. This will let you use r.text with the correct encoding.
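A rough sketch of that pattern, assuming a hypothetical page and using the apparent_encoding attribute (Requests' own guess based on the raw bytes) as the fallback:

r = requests.get('http://example.com/legacy-page')    # hypothetical URL
if 'charset' not in r.headers.get('content-type', ''):
    # The headers didn't declare an encoding; fall back to the guess made from r.content.
    r.encoding = r.apparent_encoding
text = r.text                                          # now decoded with the encoding we chose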

Requests will also use custom encodings in the event that you need them. If you have created your own encoding and registered it with the codecs module, you can simply use the codec name as the value of r.encoding and Requests will handle the decoding for you.

Binary Response Content

You can also access the response body as bytes, for non-text requests:

>>> r.content
b'[{"repository":{"open_issues":0,"url":"https://github.com/...

The gzip and deflate transfer-encodings are automatically decoded for you.

For example, to create an image from binary data returned by a request, you can use the following code:

>>> from PIL import Image 
>>> from io import BytesIO 
>>> i = Image.open(BytesIO(r.content)) 

JSON Response Content

There's also a builtin JSON decoder, in case you're dealing with JSON data:

>>> import requests 
>>> r = requests.get('https://api.github.com/events') 
>>> r.json()
[{u'repository': {u'open_issues': 0, u'url': 'https://github.com/...

In case the JSON decoding fails, r.json raises an exception. For example, if the response gets a 204 (No Content), or if the response contains invalid JSON, attempting r.json raises ValueError: No JSON object could be decoded.

It should be noted that the success of the call to r.json does not indicate the success of the response. Some servers may return a JSON object in a failed response (e.g. error details with HTTP 500). Such JSON will be decoded and returned. To check that a request is successful, use r.raise_for_status() or check that r.status_code is what you expect.
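A minimal sketch of that check (the URL is hypothetical):

r = requests.get('http://example.com/api/items')   # hypothetical URL
r.raise_for_status()    # raises requests.exceptions.HTTPError for 4XX/5XX responses
data = r.json()         # only decode the body once we know the request succeeded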

Raw Response Content

In the rare case that you'd like to get the raw socket response from the server, you can access r.raw. If you want to do this, make sure you set stream=True in your initial request. Once you do, you can do this:

>>> r = requests.get('https://api.github.com/events', stream=True) 
>>> r.raw
<urllib3.response.HTTPResponse object at 0x...>
>>> r.raw.read(10)
'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03'

In general, however, you should use a pattern like this to save what is being streamed to a file:

with open(filename, 'wb') as fd:
    for chunk in r.iter_content(chunk_size):
        fd.write(chunk)

Using Response.iter_content will handle a lot of what you would otherwise have to handle when using Response.raw directly. When streaming a download, the above is the preferred and recommended way to retrieve the content.
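Putting the pieces together, a minimal download sketch could look like this (the URL, filename and chunk size are illustrative):

import requests

url = 'http://example.com/big-file.zip'             # hypothetical URL
r = requests.get(url, stream=True)                   # don't read the whole body into memory
r.raise_for_status()
with open('big-file.zip', 'wb') as fd:
    for chunk in r.iter_content(chunk_size=8192):    # stream the body in 8 KB chunks
        fd.write(chunk)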

Custom Headers

If you'd like to add HTTP headers to a request, simply pass in a dict to the headers parameter.

For example, we didn't specify our user-agent in the previous example:

>>> url = 'https://api.github.com/some/endpoint' 
>>> headers = {'user-agent': 'my-app/0.0.1'} 
>>> r = requests.get(url, headers=headers) 

Note: Custom headers are given less precedence than more specific sources of information. For instance:

  • Authorization headers set with headers= will be overridden if credentials are specified in .netrc, which in turn will be overridden by the auth= parameter.
  • Authorization headers will be removed if you get redirected off-host.
  • Proxy-Authorization headers will be overridden by proxy credentials provided in the URL.
  • Content-Length headers will be overridden when we can determine the length of the content.

Furthermore, Requests does not change its behavior at all based on which custom headers are specified. The headers are simply passed on into the final request.

More complicated POST requests

Typically, you want to send some form-encoded data — much like an HTML form. To do this, simply pass a dictionary to the data argument. Your dictionary of data will automatically be form-encoded when the request is made:

>>> payload = {'key1': 'value1', 'key2': 'value2'} 
>>> r = requests.post("", data=payload) 
>>> print(r.text) {  ...  "form": {  "key2": "value2",  "key1": "value1"  },  ... } 

There are many times that you want to send data that is not form-encoded. If you pass in a string instead of a dict, that data will be posted directly.

For example, the GitHub API v3 accepts JSON-Encoded POST/PATCH data:

>>> import json 
>>> url = 'https://api.github.com/some/endpoint' 
>>> payload = {'some': 'data'} 
>>> r = requests.post(url, data=json.dumps(payload)) 

Instead of encoding the dict yourself, you can also pass it directly using the json parameter (added in version 2.4.2) and it will be encoded automatically:

>>> url = 'https://api.github.com/some/endpoint' 
>>> payload = {'some': 'data'} 
>>> r = requests.post(url, json=payload) 

POST a Multipart-Encoded File

Requests makes it simple to upload Multipart-encoded files:

>>> url = 'http://httpbin.org/post' 
>>> files = {'file': open('report.xls', 'rb')} 
>>> r = requests.post(url, files=files) 
>>> r.text
{
  ...
  "files": {
    "file": "<censored...binary...data>"
  },
  ...
}

You can set the filename, content_type and headers explicitly:

>>> url = 'http://httpbin.org/post' 
>>> files = {'file': ('report.xls', open('report.xls', 'rb'), 'application/vnd.ms-excel', {'Expires': '0'})} 
>>> r = requests.post(url, files=files) 
>>> r.text
{
  ...
  "files": {
    "file": "<censored...binary...data>"
  },
  ...
}

If you want, you can send strings to be received as files:

>>> url = 'http://httpbin.org/post' 
>>> files = {'file': ('report.csv', 'some,data,to,send\nanother,row,to,send\n')} 
>>> r = requests.post(url, files=files) 
>>> r.text
{
  ...
  "files": {
    "file": "some,data,to,send\\nanother,row,to,send\\n"
  },
  ...
}

In the event you are posting a very large file as a multipart/form-data request, you may want to stream the request. By default, requests does not support this, but there is a separate package which does - requests-toolbelt. You should read the toolbelt's documentation for more details about how to use it.
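A rough sketch using the toolbelt's MultipartEncoder (the field names and file are illustrative; double-check the toolbelt documentation for the version you install):

import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

m = MultipartEncoder(fields={
    'comment': 'a regular form field',
    'file': ('report.xls', open('report.xls', 'rb'), 'application/vnd.ms-excel'),
})
# The encoder streams the file from disk instead of loading it all into memory.
r = requests.post('http://httpbin.org/post', data=m,
                  headers={'Content-Type': m.content_type})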

For sending multiple files in one request refer to the advanced section of the documentation.

Warning

It is strongly recommended that you open files in binary mode. This is because Requests may attempt to provide the Content-Length header for you, and if it does this value will be set to the number of bytes in the file. Errors may occur if you open the file in text mode.

Response Status Codes

We can check the response status code:

>>> r = requests.get('http://httpbin.org/get') 
>>> r.status_code
200

Requests also comes with a built-in status code lookup object for easy reference:

>>> r.status_code == requests.codes.ok
True

If we made a bad request (a 4XX client error or 5XX server error response), we can raise it with:

>>> bad_r = requests.get('http://httpbin.org/status/404') 
>>> bad_r.status_code
404
>>> bad_r.raise_for_status()
Traceback (most recent call last):
  File "requests/models.py", line 832, in raise_for_status
    raise http_error
requests.exceptions.HTTPError: 404 Client Error

But, since our status_code for r was 200, when we call raise_for_status() we get:

>>> r.raise_for_status()
None

All is well.

Response Headers

We can view the server's response headers using a Python dictionary:

>>> r.headers
{
    'content-encoding': 'gzip',
    'transfer-encoding': 'chunked',
    'connection': 'close',
    'server': 'nginx/1.0.4',
    'x-runtime': '148ms',
    'etag': '"e1ca502697e5c9317743dc078f67693f"',
    'content-type': 'application/json'
}

The dictionary is special, though: it's made just for HTTP headers. According to RFC 7230, HTTP Header names are case-insensitive.

So, we can access the headers using any capitalization we want:

>>> r.headers['Content-Type']
'application/json'
>>> r.headers.get('content-type')
'application/json'

It is also special in that the server could have sent the same header multiple times with different values, but requests combines them so they can be represented in the dictionary within a single mapping, as per RFC 7230:

A recipient MAY combine multiple header fields with the same field name into one "field-name: field-value" pair, without changing the semantics of the message, by appending each subsequent field value to the combined field value in order, separated by a comma.

Cookies

If a response contains some Cookies, you can quickly access them:

>>> url = 'http://example.com/some/cookie/setting/url' 
>>> r = requests.get(url) 
>>> r.cookies['example_cookie_name']
'example_cookie_value'

To send your own cookies to the server, you can use the cookies parameter:

>>> url = 'http://httpbin.org/cookies' 
>>> cookies = dict(cookies_are='working') 
>>> r = requests.get(url, cookies=cookies) 
>>> r.text '{"cookies": {"cookies_are": "working"}}'
						
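The cookies come back in a RequestsCookieJar, and you can build one yourself for finer control over domain and path (a sketch; the cookie name, value, domain and path are illustrative):

jar = requests.cookies.RequestsCookieJar()
jar.set('tasty_cookie', 'yum', domain='httpbin.org', path='/cookies')
r = requests.get('http://httpbin.org/cookies', cookies=jar)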

Session 

A Session can be used to log in first and then visit other pages, reusing the same cookies and connection settings across requests:

session = requests.Session()
session.get(url, headers=headers)                             # cookies from this response are kept on the session
session.cookies.set('name', 'value', domain='example.com')    # set() takes domain/path as keyword arguments
session.post(url, headers=headers, data=data)                 # subsequent requests send the stored cookies
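For example, a typical login flow might look like the sketch below (the URLs, form field names and credentials are all hypothetical; real sites will differ):

import requests

session = requests.Session()

# Log in once; any cookies the server sets are stored on the session.
session.post('http://example.com/login',
             data={'username': 'alice', 'password': 'secret'})

# Later requests on the same session send those cookies automatically.
r = session.get('http://example.com/profile')
print(r.status_code)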

Redirection and History

By default Requests will perform location redirection for all verbs except HEAD.

We can use the history property of the Response object to track redirection.

The history list contains the Response objects that were created in order to complete the request. The list is sorted from the oldest to the most recent response.

For example, GitHub redirects all HTTP requests to HTTPS:

>>> r = requests.get('http://github.com') 
>>> r.url
'https://github.com/'
>>> r.status_code
200
>>> r.history
[<Response [301]>]

If you're using GET, OPTIONS, POST, PUT, PATCH or DELETE, you can disable redirection handling with the allow_redirects parameter:

>>> r = requests.get('http://github.com', allow_redirects=False) 
>>> r.status_code
301
>>> r.history
[]

If you're using HEAD, you can enable redirection as well:

>>> r = requests.head('http://github.com', allow_redirects=True) 
>>> r.url
'https://github.com/'
>>> r.history
[<Response [301]>]

Timeouts

You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter:

>>> requests.get('http://github.com', timeout=0.001)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
requests.exceptions.Timeout: HTTPConnectionPool(host='github.com', port=80): Request timed out. (timeout=0.001)

Note

timeout is not a time limit on the entire response download; rather, an exception is raised if the server has not issued a response for timeout seconds (more precisely, if no bytes have been received on the underlying socket for timeout seconds).
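If your copy of Requests is new enough (2.4.0 or later), timeout also accepts a (connect, read) tuple so the two phases can be limited separately; the numbers below are just examples:

>>> r = requests.get('https://api.github.com/events', timeout=(3.05, 27)) 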

Errors and Exceptions

In the event of a network problem (e.g. DNS failure, refused connection, etc), Requests will raise a ConnectionError exception.

Response.raise_for_status() will raise an HTTPError if the HTTP request returned an unsuccessful status code.

If a request times out, a Timeout exception is raised.

If a request exceeds the configured number of maximum redirections, a TooManyRedirects exception is raised.

All exceptions that Requests explicitly raises inherit from requests.exceptions.RequestException.
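A catch-all sketch of handling these exceptions (the URL and messages are illustrative):

import requests
from requests.exceptions import ConnectionError, HTTPError, Timeout, RequestException

try:
    r = requests.get('http://example.com/api', timeout=5)   # hypothetical URL
    r.raise_for_status()
except Timeout:
    print('the server did not respond in time')
except ConnectionError:
    print('network problem (DNS failure, refused connection, ...)')
except HTTPError as err:
    print('unsuccessful status code:', err)
except RequestException as err:
    print('some other error raised by Requests:', err)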


Re-post from the official Requests Quickstart documentation.
