Example:
http://example.com/?a=text&q2=text2&q3=text3&q2=text4
After removing "q2" it should return:
http://example.com/?a=text&q3=text3
In this case there is more than one "q2", and all of them are removed.
import sys

if sys.version_info.major == 3:
    from urllib.parse import urlencode, urlparse, urlunparse, parse_qs
else:
    from urllib import urlencode
    from urlparse import urlparse, urlunparse, parse_qs

url = 'http://example.com/?a=text&q2=text2&q3=text3&q2=text4&b#q2=keep_fragment'
u = urlparse(url)
query = parse_qs(u.query, keep_blank_values=True)
query.pop('q2', None)
u = u._replace(query=urlencode(query, True))
print(urlunparse(u))
Output:
http://example.com/?a=text&q3=text3&b=#q2=keep_fragment
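If you need this in more than one place, the same approach can be wrapped in a small helper. A minimal Python 3 sketch (the name remove_query_param is mine, not from the original answer):

from urllib.parse import urlencode, urlparse, urlunparse, parse_qs

def remove_query_param(url, param):
    # Parse the URL, drop every occurrence of `param`, and rebuild the URL.
    u = urlparse(url)
    query = parse_qs(u.query, keep_blank_values=True)
    query.pop(param, None)
    return urlunparse(u._replace(query=urlencode(query, True)))

print(remove_query_param('http://example.com/?a=text&q2=text2&q3=text3&q2=text4', 'q2'))
# http://example.com/?a=text&q3=text3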
To remove all query string parameters:
from urllib.parse import urljoin, urlparse
url = 'http://example.com/?a=text&q2=text2&q3=text3&q2=text4'
urljoin(url, urlparse(url).path) # 'http://example.com/'
For Python 2, replace the import with:
from urlparse import urljoin, urlparse
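As a quick sanity check (an example of my own, not from the original answer), this one-liner also drops any fragment along with the query string:

from urllib.parse import urljoin, urlparse
url = 'http://example.com/api/item?id=1&page=2#top'
urljoin(url, urlparse(url).path)  # 'http://example.com/api/item'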
Isn't this just a matter of splitting the string on the '?' character?
>>> url = 'http://example.com/?a=text&q2=text2&q3=text3&q2=text4'
>>> url.split('?')[0]
'http://example.com/'
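For what it's worth (my addition), this is also safe when the URL has no query string at all, since split simply returns the whole string:
>>> 'http://example.com/page'.split('?')[0]
'http://example.com/page'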
Using furl, a Python library for URL manipulation:
import furl
f = furl.furl("http://example.com/?a=text&q2=text2&q3=text3&q2=text4")
f.remove(['q2'])
print(f.url)
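# Expected output (hedged, assuming furl removes every 'q2' key as its docs describe):
# http://example.com/?a=text&q3=text3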
query_string = "https://example.com/api/api.php?user=chris&auth=true"
url = query_string[:query_string.find('?', 0)]
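One caveat (my note, not part of the original answer): str.find returns -1 when there is no '?', which would slice off the last character of the URL. A hedged guard:

query_string = "https://example.com/api/api.php"
pos = query_string.find('?')
url = query_string[:pos] if pos != -1 else query_string  # leave the URL untouched if it has no query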
Or you could just use strip:
>>> l='http://example.com/?a=text&q2=text2&q3=text3&q2=text4'
>>> l.strip('&q2=text4')
'http://example.com/?a=text&q2=text2&q3=text3'
>>>
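Be careful with this one: strip removes a set of characters from both ends, not an exact substring, so it only happens to work on this particular URL. A quick counterexample (mine):
>>> 'http://example.com/?a=text&q2=42'.strip('&q2=text4')
'http://example.com/?a'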
Or, simply, just use url_query_cleaner() from w3lib.url:
from w3lib.url import url_query_cleaner
url = 'http://example.com/?a=text&q2=text2&q3=text3&q2=text4'
url_query_cleaner(url, ['q2'], remove=True)
Output: http://example.com/?a=text&q3=text3
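The same function can also be used the other way around: without remove=True it keeps only the listed parameters (a hedged example, based on my reading of the w3lib docs):

url_query_cleaner(url, ['a', 'q3'])  # 'http://example.com/?a=text&q3=text3'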
import re

q = "http://example.com/?a=text&q2=text2&q3=text3&q2=text4"
todelete = "q2"

# Delete every query string parameter matching the pattern
r = re.sub(todelete + r'=[a-zA-Z_0-9]*&*', '', q)
# Delete the possible trailing &
r = re.sub(r'&$', '', r)

print(r)
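One edge case worth adding (my note, not the original answer's): if the deleted parameter was the only one in the query string, a bare '?' is left at the end. A hedged extension of the same cleanup:

import re
r = re.sub(r'q2=[a-zA-Z_0-9]*&*', '', "http://example.com/?q2=text4")
r = re.sub(r'[&?]$', '', r)  # also strip a leftover '?' or '&'
print(r)  # http://example.com/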