
I'm fetching a specific URL that returns JSON (from the stackexchange and stackoverflow APIs). When I run json.loads() on the response, I get the following error:

import urllib2
import json

url = "http://api.stackexchange.com/2.1/tags?order=desc&sort=popular&site=quant&pagesize=100&page=1"    
data = json.loads(urllib2.urlopen(url).read())

<ipython-input-20-7540e91a8ff2> in <module>()
----> 1 data = json.loads(urllib2.urlopen(url).read())

/usr/lib/python2.7/json/__init__.pyc in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    336             parse_int is None and parse_float is None and
    337             parse_constant is None and object_pairs_hook is None and not kw):
--> 338         return _default_decoder.decode(s)
    339     if cls is None:
    340         cls = JSONDecoder

/usr/lib/python2.7/json/decoder.pyc in decode(self, s, _w)
    363 
    364         """
--> 365         obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    366         end = _w(s, end).end()
    367         if end != len(s):

/usr/lib/python2.7/json/decoder.pyc in raw_decode(self, s, idx)
    381             obj, end = self.scan_once(s, idx)
    382         except StopIteration:
--> 383             raise ValueError("No JSON object could be decoded")
    384         return obj, end

ValueError: No JSON object could be decoded

On the other hand, everything works fine with the twitter api... why?


2 Answers


As @Thomas says, this is because of the gzip compression.
I'd suggest using the requests library, which takes care of this sort of thing for you:

import requests

data_url = "https://api.stackexchange.com/2.1/search?page=1&pagesize=10&order=desc&sort=activity&tagged=pandas&site=stackoverflow"
data_json = requests.get(data_url).json()
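requests sends an Accept-Encoding header and transparently decompresses the gzipped body, so .json() sees plain JSON. As a quick sanity check (assuming the usual StackExchange response envelope, where results sit under an "items" key and each item from /search is a question with a "title" field):

# Print the titles of the returned questions (Python 2).
for item in data_json.get("items", []):
    print item["title"]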
Answered 2013-06-10T15:19:24.867

The StackExchange API always compresses its responses, but Python does not decompress them automatically, so json is being handed gzip-compressed data.

This answer shows how to handle the response with the gzip module.
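A minimal sketch of that approach (Python 2, sticking with urllib2 and decompressing the body in memory with gzip and StringIO before passing it to json.loads):

import urllib2
import json
import gzip
from StringIO import StringIO

url = "http://api.stackexchange.com/2.1/tags?order=desc&sort=popular&site=quant&pagesize=100&page=1"

# Read the raw (gzip-compressed) response body.
compressed = urllib2.urlopen(url).read()
# Wrap it in a file-like object so GzipFile can decompress it without a temp file.
raw = gzip.GzipFile(fileobj=StringIO(compressed)).read()
data = json.loads(raw)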

Answered 2013-05-14T22:31:29.213