
I wrote a small tool that collects data from Facebook via its API. The tool uses the multiprocessing, Queue, and httplib modules. Here is part of the code:

The main flow:

def extract_and_save(args):
    put_queue = JoinableQueue()
    get_queue = Queue()

    for index in range(args.number_of_processes):
        process_name = u"facebook_worker-%s" % index
        grabber = FacebookGrabber(get_queue=put_queue, put_queue=get_queue, name=process_name)
        grabber.start()

    friend_list = get_user_friends(args.default_user_id, ["id"])
    for index, friend_id in enumerate(friend_list):
        put_queue.put(friend_id)

    put_queue.join()
    if not get_queue.empty():
        ... save to database ...
    else:
        logger.info(u"There is no data to save")

The worker process:

class FacebookGrabber(Process):
    def __init__(self, *args, **kwargs):
        self.connection = httplib.HTTPSConnection("graph.facebook.com", timeout=2)
        self.get_queue = kwargs.pop("get_queue")
        self.put_queue = kwargs.pop("put_queue")
        super(FacebookGrabber, self).__init__(*args, **kwargs)
        self.daemon = True

    def run(self):
        while True:
            friend_id = self.get_queue.get(block=True)
            try:
                friend_obj = self.get_friend_obj(friend_id)
            except Exception, e:
                logger.info(u"Friend id %s: facebook responded with an error (%s)", friend_id, e)
            else:
                if friend_obj:
                    self.put_queue.put(friend_obj)
            self.get_queue.task_done()
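The `put_queue.join()` / `task_done()` handshake is what lets the main process know every friend ID has been processed before it checks the results queue. A minimal sketch of the same pattern, using threads and `queue.Queue` (Python 3) purely for illustration, with a doubling function standing in for `get_friend_obj`:

```python
import queue
import threading

def worker(get_queue, put_queue):
    # Mirrors FacebookGrabber.run(): take an item, process it, report done.
    while True:
        item = get_queue.get(block=True)
        put_queue.put(item * 2)  # stand-in for the real API call
        get_queue.task_done()    # one task_done() per get() lets join() unblock

put_q = queue.Queue()  # work to do
get_q = queue.Queue()  # results
for index in range(3):
    threading.Thread(target=worker, args=(put_q, get_q), daemon=True).start()

for friend_id in [1, 2, 3, 4]:
    put_q.put(friend_id)

put_q.join()  # blocks until every queued item has been marked task_done()
results = sorted(get_q.get() for _ in range(get_q.qsize()))
print(results)  # [2, 4, 6, 8]
```

Because each worker puts its result *before* calling `task_done()`, all results are guaranteed to be in `get_q` by the time `join()` returns.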

Common code:

def get_json_from_facebook(connection, url, kwargs=None):
    url_parts = list(urlparse.urlparse(url))
    query = dict(urlparse.parse_qsl(url_parts[4]))
    if kwargs:
        query.update(kwargs)
    url_parts[4] = urllib.urlencode(query)
    url = urlparse.urlunparse(url_parts)
    try:
        connection.request("GET", url)
    except Exception, e:
        print "<<<", e

    response = connection.getresponse()
    data = json.load(response)
    return data
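The URL handling in `get_json_from_facebook` merges extra parameters (`kwargs`) into whatever query string the URL already carries. The same merge shown standalone, written against Python 3's `urllib.parse` (where the Python 2 `urlparse`/`urllib` functions now live); `merge_query` is just a name used here for illustration:

```python
from urllib import parse

def merge_query(url, extra):
    # Split the URL, fold `extra` into its existing query string, rebuild it.
    url_parts = list(parse.urlparse(url))
    query = dict(parse.parse_qsl(url_parts[4]))  # index 4 is the query component
    query.update(extra)
    url_parts[4] = parse.urlencode(query)
    return parse.urlunparse(url_parts)

print(merge_query("https://graph.facebook.com/me?fields=id", {"limit": "10"}))
# https://graph.facebook.com/me?fields=id&limit=10
```

Existing parameters are preserved, and a key appearing in both the URL and `extra` takes the value from `extra` because of the `dict.update` call.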

This code works perfectly on Ubuntu. But when I try to run it on Windows 7, I get the message "There is no data to save". The problem is here:

try:
    connection.request("GET", url)
except Exception, e:
    print "<<<", e

I get the following error: <<< a float is required

Does anyone know how to fix this problem?

Python version: 2.7.5


1 Answer


One of the occasional "gotchas" with socket timeout values is that most operating systems expect them to be floats. I believe this has been addressed in later Linux kernels.

Try changing:

    self.connection = httplib.HTTPSConnection("graph.facebook.com", timeout=2)

to:

    self.connection = httplib.HTTPSConnection("graph.facebook.com", timeout=2.0)

By the way, that is a 2-second timeout. The default is usually 5 seconds, so 2 may be a bit low.
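The int-vs-float distinction can be checked without any network traffic. In current Python 3 the socket layer converts the value for you, which is why code like this works either way; on Python 2.7/Windows the conversion was not done everywhere in the request path, so passing `2.0` explicitly sidesteps the "a float is required" error:

```python
import socket

s = socket.socket()
s.settimeout(2)  # an int is accepted at the Python level...
# ...but the timeout is stored and handed to the OS as a float.
print(type(s.gettimeout()))  # <class 'float'>
s.close()
```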

Answered 2013-08-29T21:59:58.710