
How do I get 1500 tweets? I tried the page parameter and found that it doesn't work, and now I'm stuck on max_id and since_id. I don't understand max_id and since_id. When I run a query, I want to get the 1500 most recent tweets as of the moment the query was sent. Here is my code:

# -*- coding: utf-8 -*-
import urllib
import simplejson

def searchTweets(query):
    search = urllib.urlopen("http://search.twitter.com/search.json?q="+query)
    data = simplejson.loads(search.read())
    counter = 0
    for result in data["results"]:
        print "*", result["text"].encode('utf-8')
        counter += 1
    print "\n", counter, " tweets found", "\n"

searchTerm = "steak"
searchTweets(searchTerm+"&rpp=100&page=15")

Does anyone know a solution?


1 Answer


Got this working for me with 1200 tweets:

# -*- coding: utf-8 -*-
import urllib
import simplejson

def searchTweets(query, minimum_tweets):
    results = []
    i = 0
    while len(results) < minimum_tweets:
        if i == 0:  # First time through, don't include max_id
            response = urllib.urlopen("http://search.twitter.com/search.json?q="+query+"&rpp=100")
        else:  # Subsequent times, include max_id
            response = urllib.urlopen("http://search.twitter.com/search.json?q="+query+"&rpp=100&max_id="+max_id)
        response = simplejson.loads(response.read())
        if not response['results']:
            break  # Stop if no tweets are returned
        max_id = str(long(response['results'][-1]['id_str']) - 1)  # Define max_id for the next iteration
        results.extend(response['results'])  # Append this page of tweets to results
        i += 1

    print "\n", len(results), " tweets found", "\n"

searchTerm = "steak"
searchTweets(searchTerm, 1200)

The problem with it is that the Twitter search API breaks fairly often, and there is no error handling or retrying here. But it should show you the logic behind max_id. I make max_id one less than the id of the last tweet pulled, so there aren't any duplicates.
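To sketch how that gap could be plugged, here is a minimal retry wrapper (not part of the original answer; `fetch_with_retry` is a hypothetical helper). It retries a fetch callable on IOError, which is what urllib raises on network failures:

```python
import time

def fetch_with_retry(fetch, retries=3, delay=1.0):
    """Call fetch(), retrying up to `retries` times on IOError,
    sleeping `delay` seconds between attempts."""
    for attempt in range(retries):
        try:
            return fetch()
        except IOError:
            if attempt == retries - 1:
                raise  # Out of attempts: surface the last error
            time.sleep(delay)
```

Each urlopen call in the loop could then be wrapped, e.g. `fetch_with_retry(lambda: urllib.urlopen(url))`.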

Also, there's definitely a more elegant way to decide whether to include max_id in the URL. This solution came about because max_id doesn't have a default value (I wish it did :p).
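One way to avoid the special case, as a sketch: keep max_id as None and build the query string with urllib's urlencode, appending max_id only once it has a value. (`build_search_url` is a hypothetical helper, not from the original answer; the compat import covers both Python 2 and 3.)

```python
try:  # Python 3
    from urllib.parse import urlencode
except ImportError:  # Python 2, as in the answer's code
    from urllib import urlencode

def build_search_url(query, max_id=None, rpp=100):
    """Build the search URL, including max_id only once we have one."""
    params = [("q", query), ("rpp", rpp)]
    if max_id is not None:
        params.append(("max_id", max_id))
    return "http://search.twitter.com/search.json?" + urlencode(params)
```

The loop then needs no `i` counter: just call `build_search_url(query, max_id)` every iteration.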

Answered 2012-08-23T15:26:11.277