The Python standard library provides modules such as urllib, urllib2, and httplib for making HTTP requests, but they are not very pleasant to use.
Requests is an Apache2-licensed HTTP library written in Python. It is a high-level wrapper around Python's built-in modules, which makes sending network requests from Python far more pleasant; with Requests you can easily do just about anything a browser can do.
Official documentation:
Installation
pip install requests
Usage
GET requests
Without parameters
```python
import requests

response = requests.get("http://www.baidu.com")
print(response.url)
print(response.text)
```
With parameters
```python
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
response = requests.get("http://www.baidu.com", params=payload)
print(response.url)  # http://www.baidu.com/?key1=value1&key2=value2
print(response.text)
```
POST requests
Basic POST example
```python
import requests

payload = {'key1': 'value1', 'key2': 'value2'}
response = requests.post("http://www.baidu.com", data=payload)
print(response.url)
print(response.text)
```
Sending request headers and data
```python
import requests
import json

url = "http://www.baidu.com"
payload = {'key1': 'value1', 'key2': 'value2'}
headers = {'content-type': 'application/json'}
response = requests.post(url, data=json.dumps(payload), headers=headers)
print(response.url)
print(response.text)
print(response.cookies)  # <RequestsCookieJar[...]>
```
Note: if the request body carries data, the server first looks at the Content-Type value in the request headers to decide how to parse it.
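This can be checked without sending anything over the network: `requests.Request(...).prepare()` builds the request locally, so the headers that requests generates automatically can be inspected. A small sketch (the URL is just a placeholder):

```python
import requests

# Build (but do not send) two requests to compare the Content-Type
# header that requests sets automatically for each body type.
form_req = requests.Request('POST', 'http://example.com/test',
                            data={'k1': 'v1'}).prepare()
json_req = requests.Request('POST', 'http://example.com/test',
                            json={'k1': 'v1'}).prepare()

print(form_req.headers['Content-Type'])  # application/x-www-form-urlencoded
print(json_req.headers['Content-Type'])  # application/json
```

A dict passed via `data=` is form-encoded, while `json=` serializes the object and marks the body as JSON, so the server parses each correctly.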
Other request methods
The commonly used request methods are as follows:
```python
requests.get(url, params=None, **kwargs)
requests.post(url, data=None, json=None, **kwargs)
requests.put(url, data=None, **kwargs)
requests.head(url, **kwargs)
requests.delete(url, **kwargs)
requests.patch(url, data=None, **kwargs)
requests.options(url, **kwargs)
```
In fact, all of them are thin wrappers around a single method:
```python
requests.request(method, url, **kwargs)
```
The request method in the source code
```python
def request(method, url, **kwargs):
    """Constructs and sends a :class:`Request <Request>`.

    :param method: method for the new :class:`Request` object.
    :param url: URL for the new :class:`Request` object.
    :param params: (optional) Dictionary or bytes to be sent in the query
        string for the :class:`Request`.
    :param data: (optional) Dictionary or list of tuples ``[(key, value)]``
        (will be form-encoded), bytes, or file-like object to send in the
        body of the :class:`Request`.
    :param json: (optional) A JSON serializable Python object to send in the
        body of the :class:`Request`.
    :param headers: (optional) Dictionary of HTTP Headers to send with the
        :class:`Request`.
    :param cookies: (optional) Dict or CookieJar object to send with the
        :class:`Request`.
    :param files: (optional) Dictionary of ``'name': file-like-objects``
        (or ``{'name': file-tuple}``) for multipart encoding upload.
        ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple
        ``('filename', fileobj, 'content_type')`` or a 4-tuple
        ``('filename', fileobj, 'content_type', custom_headers)``, where
        ``'content-type'`` is a string defining the content type of the
        given file and ``custom_headers`` a dict-like object containing
        additional headers to add for the file.
    :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth.
    :param timeout: (optional) How many seconds to wait for the server to send
        data before giving up, as a float, or a :ref:`(connect timeout, read
        timeout) <timeouts>` tuple.
    :type timeout: float or tuple
    :param allow_redirects: (optional) Boolean. Enable/disable
        GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``.
    :type allow_redirects: bool
    :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy.
    :param verify: (optional) Either a boolean, in which case it controls
        whether we verify the server's TLS certificate, or a string, in which
        case it must be a path to a CA bundle to use. Defaults to ``True``.
    :param stream: (optional) if ``False``, the response content will be
        immediately downloaded.
    :param cert: (optional) if String, path to ssl client cert file (.pem).
        If Tuple, ('cert', 'key') pair.
    :return: :class:`Response <Response>` object
    :rtype: requests.Response

    Usage::

      >>> import requests
      >>> req = requests.request('GET', 'http://httpbin.org/get')
    """
    # By using the 'with' statement we are sure the session is closed, thus we
    # avoid leaving sockets open which can trigger a ResourceWarning in some
    # cases, and look like a memory leak in others.
    with sessions.Session() as session:
        return session.request(method=method, url=url, **kwargs)
```
Explanation and demonstration of the common request parameters
```python
import requests

# method: the HTTP method
# url: the address to send the request to
requests.request(method='get', url='http://www.baidu.com')
requests.request(method='post', url='http://www.baidu.com')

# ------------------------------------------------------------------------
# params: parameters passed in the URL of a GET request; can be a dict,
# a string, or bytes (ASCII range only)
requests.request(method='get',
                 url='http://127.0.0.1:8000/test/',
                 params={'k1': 'v1', 'k2': '水电费'})

requests.request(method='get',
                 url='http://127.0.0.1:8000/test/',
                 params="k1=v1&k2=水电费&k3=v3&k3=vv3")

requests.request(method='get',
                 url='http://127.0.0.1:8000/test/',
                 params=bytes("k1=v1&k2=k2&k3=v3&k3=vv3", encoding='utf8'))

# Error: the bytes contain non-ASCII characters
requests.request(method='get',
                 url='http://127.0.0.1:8000/test/',
                 params=bytes("k1=v1&k2=水电费&k3=v3&k3=vv3", encoding='utf8'))

# ------------------------------------------------------------------------
# data: data passed in the request body; can be a dict, a string, bytes,
# or a file object
requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 data={'k1': 'v1', 'k2': '水电费'})

requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 data="k1=v1; k2=v2; k3=v3; k3=v4")

requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 data="k1=v1;k2=v2;k3=v3;k3=v4",
                 headers={'Content-Type': 'application/x-www-form-urlencoded'})

requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 # the file's content is: k1=v1;k2=v2;k3=v3;k3=v4
                 data=open('data_file.py', mode='r', encoding='utf-8'),
                 headers={'Content-Type': 'application/x-www-form-urlencoded'})

# ------------------------------------------------------------------------
# json: pass JSON data in the request body
# The object is serialized to a string with json.dumps(...), sent in the
# body, and the header {'Content-Type': 'application/json'} is set
requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 json={'k1': 'v1', 'k2': '水电费'})

# ------------------------------------------------------------------------
# headers: set the request headers. Notable ones include Referer, which
# records the page the request came from, and User-Agent, which identifies
# the browser or system making the request.
requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 json={'k1': 'v1', 'k2': '水电费'},
                 headers={'Content-Type': 'application/x-www-form-urlencoded'})

# ------------------------------------------------------------------------
# cookies: cookies to send; cookies normally travel in the request headers
requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 data={'k1': 'v1', 'k2': 'v2'},
                 cookies={'cook1': 'value1'})

# A CookieJar can also be used (the dict form is a wrapper around it)
from http.cookiejar import CookieJar, Cookie

obj = CookieJar()
obj.set_cookie(Cookie(version=0, name='c1', value='v1', port=None,
                      domain='', path='/', secure=False, expires=None,
                      discard=True, comment=None, comment_url=None,
                      rest={'HttpOnly': None}, rfc2109=False,
                      port_specified=False, domain_specified=False,
                      domain_initial_dot=False, path_specified=False))
requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 data={'k1': 'v1', 'k2': 'v2'},
                 cookies=obj)

# ------------------------------------------------------------------------
# files: file upload
file_dict = {'f1': open('readme', 'rb')}
requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 files=file_dict)

# Upload a file under a custom filename
file_dict = {'f1': ('test.txt', open('readme', 'rb'))}
requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 files=file_dict)

# Upload string content under a custom filename
file_dict = {'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf")}
requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 files=file_dict)

# Upload string content with a custom filename, content type and headers
file_dict = {'f1': ('test.txt', "hahsfaksfa9kasdjflaksdjf",
                    'application/text', {'k1': '0'})}
requests.request(method='POST',
                 url='http://127.0.0.1:8000/test/',
                 files=file_dict)

# ------------------------------------------------------------------------
# timeout: how long to wait for the request and the response
ret = requests.get('http://google.com/', timeout=1)  # wait at most 1 second
print(ret)
ret = requests.get('http://google.com/', timeout=(5, 1))  # connect timeout 5s, read timeout 1s
print(ret)

# ------------------------------------------------------------------------
# allow_redirects: whether to follow redirects
ret = requests.get('http://127.0.0.1:8000/test/', allow_redirects=False)
print(ret.text)

# ------------------------------------------------------------------------
# proxies: send the request through a proxy
proxies = {
    "http": "61.172.249.96:80",
    "https": "http://61.185.219.126:3128",
}
proxies = {'http://10.20.1.128': 'http://10.10.1.10:5323'}  # per-host proxy
ret = requests.get("http://www.proxy360.cn/Proxy", proxies=proxies)
print(ret.headers)

# auth: authentication. For example, auth=HTTPBasicAuth("zhangsan", "1234")
# joins the two values, base64-encodes them, and puts the result in the
# request headers under the Authorization key.
from requests.auth import HTTPProxyAuth

proxyDict = {'http': '77.75.105.165', 'https': '77.75.105.165'}
auth = HTTPProxyAuth('username', 'mypassword')
r = requests.get("http://www.google.com", proxies=proxyDict, auth=auth)
print(r.text)

# ------------------------------------------------------------------------
# stream: defaults to False. If True, the body is downloaded iteratively,
# piece by piece. A plain download passes through memory before reaching
# disk, so a large file could otherwise exhaust memory.
ret = requests.get('http://127.0.0.1:8000/test/', stream=True)
print(ret.content)
ret.close()

from contextlib import closing
with closing(requests.get('http://httpbin.org/get', stream=True)) as r:
    # process the response here
    for i in r.iter_content():
        print(i)

# ------------------------------------------------------------------------
# Session: keeps cookies across requests automatically
session = requests.Session()  # create a session object

# 1. Visit any page first to obtain a cookie
i1 = session.get(url="http://dig.chouti.com/help/service")

# 2. Log in, carrying the previous cookie; the backend authorizes the
#    gpsd value inside it
i2 = session.post(
    url="http://dig.chouti.com/login",
    data={
        'phone': "8615131255089",
        'password': "xxxxxx",
        'oneMonth': ""
    })
i3 = session.post(url="http://dig.chouti.com/link/vote?linksId=8589623")
print(i3.text)

# ------------------------------------------------------------------------
# verify: whether to verify the TLS certificate. Sites such as 12306 use
# their own certificate rather than one signed by a third-party CA, so
# browsers warn about them; verify=False skips the check in that case.
# cert: path to a client certificate file
```
Common attributes of the response
text vs. content
Let's start with the two most important attributes, which retrieve the response body:
- response.text
- response.content
In many situations response.text and response.content both retrieve the data in the response, and the results look much the same. So what exactly is the difference between them, and when should you use which?
Return types
response.text returns unicode text (a str), while response.content returns binary data (bytes). In other words, use response.text when you want textual data, and response.content when you want binary data such as an image or a file.
Data encoding
response.content returns the raw binary response body. response.text is decoded as "iso-8859-1" by default; when the server does not specify a charset, requests guesses the encoding from the response itself.
encoding vs. apparent_encoding
response.encoding sets the encoding used to decode the response body,
while response.apparent_encoding reports the encoding detected from the document itself.
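The interplay between these attributes can be seen offline. The sketch below builds a Response object by hand (a hack purely for illustration; in real use requests fills in the body and encoding from the server reply):

```python
import requests

# Hand-built Response, purely for illustration; real responses come
# from requests.get() and friends. _content is a private attribute.
response = requests.models.Response()
response._content = '采集数据'.encode('utf-8')  # raw body bytes, here UTF-8
response.encoding = 'iso-8859-1'                # wrong charset -> mojibake

print(type(response.content))  # <class 'bytes'>
print(type(response.text))     # <class 'str'>
print(response.text)           # garbled when decoded as iso-8859-1

# Let requests guess the charset from the bytes, then decode again
response.encoding = response.apparent_encoding
print(response.text)
```

Note that response.content is unaffected by the encoding settings; only the way response.text decodes those bytes changes.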
A small example of using encoding and apparent_encoding to fix garbled text in response.text:
```python
import requests
from bs4 import BeautifulSoup

url = 'http://www.autohome.com.cn/news/'
response = requests.get(url)
print(response.apparent_encoding)  # GB2312
response.encoding = response.apparent_encoding
soup = BeautifulSoup(response.text, 'html.parser')
print(soup.title.text)
```
Note: BeautifulSoup, used in the example above, is a library that is very handy for web scraping; it is worth a look if you are interested. The code fetches the page's title. Because the page is encoded in GB2312, reading it through response.text directly yields garbled characters; setting encoding from apparent_encoding is a handy fix I have collected, so give it a try.
Other common attributes
- response.status_code: the status code of the response
- response.cookies: the cookie object of the response
- response.cookies.get_dict(): the response's cookies as a dict
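response.cookies is a RequestsCookieJar; the sketch below builds one by hand (no request involved, just to show the dict conversion):

```python
import requests

# Build a RequestsCookieJar directly; response.cookies returns the
# same type, filled in from the Set-Cookie headers of a real response.
jar = requests.cookies.RequestsCookieJar()
jar.set('session_id', 'abc123', domain='example.com', path='/')

print(jar.get_dict())  # {'session_id': 'abc123'}
```

get_dict() flattens the jar into a plain name-to-value dict, dropping the domain and path metadata, which is usually all you need when inspecting cookies.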